00:00:00.001 Started by upstream project "autotest-per-patch" build number 132361 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.154 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.154 The recommended git tool is: git 00:00:00.155 using credential 00000000-0000-0000-0000-000000000002 00:00:00.157 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.196 Fetching changes from the remote Git repository 00:00:00.197 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.231 Using shallow fetch with depth 1 00:00:00.231 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.231 > git --version # timeout=10 00:00:00.256 > git --version # 'git version 2.39.2' 00:00:00.256 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.274 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.274 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.354 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.389 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.405 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.405 > git config core.sparsecheckout # timeout=10 00:00:07.416 > git read-tree -mu HEAD # timeout=10 00:00:07.431 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.450 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.450 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.544 [Pipeline] Start of Pipeline 00:00:07.556 [Pipeline] library 00:00:07.557 Loading library shm_lib@master 00:00:07.558 Library shm_lib@master is cached. Copying from home. 00:00:07.571 [Pipeline] node 00:00:07.580 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.582 [Pipeline] { 00:00:07.592 [Pipeline] catchError 00:00:07.593 [Pipeline] { 00:00:07.603 [Pipeline] wrap 00:00:07.610 [Pipeline] { 00:00:07.616 [Pipeline] stage 00:00:07.617 [Pipeline] { (Prologue) 00:00:07.631 [Pipeline] echo 00:00:07.632 Node: VM-host-SM17 00:00:07.636 [Pipeline] cleanWs 00:00:07.643 [WS-CLEANUP] Deleting project workspace... 00:00:07.643 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.649 [WS-CLEANUP] done 00:00:07.840 [Pipeline] setCustomBuildProperty 00:00:07.941 [Pipeline] httpRequest 00:00:08.301 [Pipeline] echo 00:00:08.302 Sorcerer 10.211.164.20 is alive 00:00:08.309 [Pipeline] retry 00:00:08.310 [Pipeline] { 00:00:08.320 [Pipeline] httpRequest 00:00:08.325 HttpMethod: GET 00:00:08.325 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.326 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.327 Response Code: HTTP/1.1 200 OK 00:00:08.328 Success: Status code 200 is in the accepted range: 200,404 00:00:08.328 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.327 [Pipeline] } 00:00:09.344 [Pipeline] // retry 00:00:09.352 [Pipeline] sh 00:00:09.630 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.644 [Pipeline] httpRequest 00:00:09.991 [Pipeline] echo 00:00:09.993 Sorcerer 10.211.164.20 is alive 00:00:10.003 [Pipeline] retry 00:00:10.005 [Pipeline] { 00:00:10.019 [Pipeline] httpRequest 00:00:10.023 HttpMethod: GET 00:00:10.023 URL: http://10.211.164.20/packages/spdk_717acfa62eb2b6321bcc0b4d71e0512da02d7ee6.tar.gz 00:00:10.024 Sending request to url: http://10.211.164.20/packages/spdk_717acfa62eb2b6321bcc0b4d71e0512da02d7ee6.tar.gz 00:00:10.036 Response Code: HTTP/1.1 200 OK 00:00:10.037 Success: Status code 200 is in the accepted range: 200,404 00:00:10.038 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_717acfa62eb2b6321bcc0b4d71e0512da02d7ee6.tar.gz 00:00:46.173 [Pipeline] } 00:00:46.193 [Pipeline] // retry 00:00:46.201 [Pipeline] sh 00:00:46.481 + tar --no-same-owner -xf spdk_717acfa62eb2b6321bcc0b4d71e0512da02d7ee6.tar.gz 00:00:49.781 [Pipeline] sh 00:00:50.063 + git -C spdk log --oneline -n5 00:00:50.063 717acfa62 test/common: Move nvme_namespace_revert() to nvme/functions.sh 00:00:50.063 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:00:50.063 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:00:50.063 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:00:50.063 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 
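Note: the two httpRequest/tar steps above pull pinned copies of the jbp scripts and of SPDK from the internal package cache (the "Sorcerer" host) and unpack them into the workspace before anything is built. A minimal sketch of the equivalent manual steps for the SPDK tarball, assuming the cache host 10.211.164.20 is reachable; the URL, revision, and tar flags are the ones recorded above, while the curl retry count is an illustrative assumption:

#!/usr/bin/env bash
# Sketch only: reproduce the pinned-source preparation performed by the pipeline above.
# Cache host, paths and revision are copied from the log; retry settings are assumptions.
set -euo pipefail

cache=http://10.211.164.20/packages
spdk_rev=717acfa62eb2b6321bcc0b4d71e0512da02d7ee6

# Fetch the pinned SPDK tarball from the package cache, retrying on transient failures.
curl --fail --retry 3 -o "spdk_${spdk_rev}.tar.gz" "${cache}/spdk_${spdk_rev}.tar.gz"

# Unpack without preserving the archive's recorded ownership, as the job does.
tar --no-same-owner -xf "spdk_${spdk_rev}.tar.gz"

# Confirm the unpacked tree is at the expected top commit (717acfa62 in this run).
git -C spdk log --oneline -n 1

The jbp archive (jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz) is fetched and extracted the same way.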
00:00:50.121 [Pipeline] writeFile 00:00:50.141 [Pipeline] sh 00:00:50.416 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:50.426 [Pipeline] sh 00:00:50.714 + cat autorun-spdk.conf 00:00:50.714 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.714 SPDK_TEST_NVMF=1 00:00:50.714 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.714 SPDK_TEST_URING=1 00:00:50.714 SPDK_TEST_USDT=1 00:00:50.714 SPDK_RUN_UBSAN=1 00:00:50.714 NET_TYPE=virt 00:00:50.714 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:50.737 RUN_NIGHTLY=0 00:00:50.739 [Pipeline] } 00:00:50.751 [Pipeline] // stage 00:00:50.765 [Pipeline] stage 00:00:50.767 [Pipeline] { (Run VM) 00:00:50.778 [Pipeline] sh 00:00:51.058 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:51.058 + echo 'Start stage prepare_nvme.sh' 00:00:51.058 Start stage prepare_nvme.sh 00:00:51.058 + [[ -n 0 ]] 00:00:51.058 + disk_prefix=ex0 00:00:51.058 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:51.058 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:51.058 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:51.058 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.058 ++ SPDK_TEST_NVMF=1 00:00:51.058 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.058 ++ SPDK_TEST_URING=1 00:00:51.058 ++ SPDK_TEST_USDT=1 00:00:51.058 ++ SPDK_RUN_UBSAN=1 00:00:51.058 ++ NET_TYPE=virt 00:00:51.058 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:51.058 ++ RUN_NIGHTLY=0 00:00:51.058 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:51.058 + nvme_files=() 00:00:51.058 + declare -A nvme_files 00:00:51.058 + backend_dir=/var/lib/libvirt/images/backends 00:00:51.058 + nvme_files['nvme.img']=5G 00:00:51.058 + nvme_files['nvme-cmb.img']=5G 00:00:51.058 + nvme_files['nvme-multi0.img']=4G 00:00:51.058 + nvme_files['nvme-multi1.img']=4G 00:00:51.058 + nvme_files['nvme-multi2.img']=4G 00:00:51.058 + nvme_files['nvme-openstack.img']=8G 00:00:51.058 + nvme_files['nvme-zns.img']=5G 00:00:51.058 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:51.058 + (( SPDK_TEST_FTL == 1 )) 00:00:51.058 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:51.058 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:51.058 + for nvme in "${!nvme_files[@]}" 00:00:51.058 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:00:51.058 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:51.058 + for nvme in "${!nvme_files[@]}" 00:00:51.058 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:00:51.058 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.058 + for nvme in "${!nvme_files[@]}" 00:00:51.058 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:00:51.058 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:51.058 + for nvme in "${!nvme_files[@]}" 00:00:51.058 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:00:51.058 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.058 + for nvme in "${!nvme_files[@]}" 00:00:51.058 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:00:51.058 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:51.058 + for nvme in "${!nvme_files[@]}" 00:00:51.058 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:00:51.058 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:51.058 + for nvme in "${!nvme_files[@]}" 00:00:51.058 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:00:51.058 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.058 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:00:51.058 + echo 'End stage prepare_nvme.sh' 00:00:51.058 End stage prepare_nvme.sh 00:00:51.069 [Pipeline] sh 00:00:51.347 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:51.347 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:00:51.347 00:00:51.347 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:51.347 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:51.347 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:51.347 HELP=0 00:00:51.347 DRY_RUN=0 00:00:51.347 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:00:51.347 NVME_DISKS_TYPE=nvme,nvme, 00:00:51.347 NVME_AUTO_CREATE=0 00:00:51.347 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:00:51.347 NVME_CMB=,, 00:00:51.347 NVME_PMR=,, 00:00:51.347 NVME_ZNS=,, 00:00:51.347 NVME_MS=,, 00:00:51.347 NVME_FDP=,, 
00:00:51.347 SPDK_VAGRANT_DISTRO=fedora39 00:00:51.347 SPDK_VAGRANT_VMCPU=10 00:00:51.347 SPDK_VAGRANT_VMRAM=12288 00:00:51.347 SPDK_VAGRANT_PROVIDER=libvirt 00:00:51.347 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:51.347 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:51.347 SPDK_OPENSTACK_NETWORK=0 00:00:51.347 VAGRANT_PACKAGE_BOX=0 00:00:51.347 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:51.347 FORCE_DISTRO=true 00:00:51.347 VAGRANT_BOX_VERSION= 00:00:51.347 EXTRA_VAGRANTFILES= 00:00:51.347 NIC_MODEL=e1000 00:00:51.347 00:00:51.347 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:51.347 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:54.639 Bringing machine 'default' up with 'libvirt' provider... 00:00:54.639 ==> default: Creating image (snapshot of base box volume). 00:00:54.899 ==> default: Creating domain with the following settings... 00:00:54.899 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732090422_b79feca997d0ea056057 00:00:54.899 ==> default: -- Domain type: kvm 00:00:54.899 ==> default: -- Cpus: 10 00:00:54.899 ==> default: -- Feature: acpi 00:00:54.899 ==> default: -- Feature: apic 00:00:54.899 ==> default: -- Feature: pae 00:00:54.899 ==> default: -- Memory: 12288M 00:00:54.899 ==> default: -- Memory Backing: hugepages: 00:00:54.899 ==> default: -- Management MAC: 00:00:54.899 ==> default: -- Loader: 00:00:54.899 ==> default: -- Nvram: 00:00:54.899 ==> default: -- Base box: spdk/fedora39 00:00:54.899 ==> default: -- Storage pool: default 00:00:54.899 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732090422_b79feca997d0ea056057.img (20G) 00:00:54.899 ==> default: -- Volume Cache: default 00:00:54.899 ==> default: -- Kernel: 00:00:54.899 ==> default: -- Initrd: 00:00:54.899 ==> default: -- Graphics Type: vnc 00:00:54.899 ==> default: -- Graphics Port: -1 00:00:54.899 ==> default: -- Graphics IP: 127.0.0.1 00:00:54.899 ==> default: -- Graphics Password: Not defined 00:00:54.899 ==> default: -- Video Type: cirrus 00:00:54.899 ==> default: -- Video VRAM: 9216 00:00:54.899 ==> default: -- Sound Type: 00:00:54.899 ==> default: -- Keymap: en-us 00:00:54.899 ==> default: -- TPM Path: 00:00:54.899 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:54.899 ==> default: -- Command line args: 00:00:54.899 ==> default: -> value=-device, 00:00:54.899 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:54.899 ==> default: -> value=-drive, 00:00:54.899 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:00:54.899 ==> default: -> value=-device, 00:00:54.899 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:54.899 ==> default: -> value=-device, 00:00:54.899 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:54.899 ==> default: -> value=-drive, 00:00:54.899 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:54.899 ==> default: -> value=-device, 00:00:54.899 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:54.899 ==> default: -> value=-drive, 00:00:54.899 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:54.899 ==> default: -> value=-device, 00:00:54.899 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:54.899 ==> default: -> value=-drive, 00:00:54.899 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:54.899 ==> default: -> value=-device, 00:00:54.899 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:55.158 ==> default: Creating shared folders metadata... 00:00:55.158 ==> default: Starting domain. 00:00:56.536 ==> default: Waiting for domain to get an IP address... 00:01:14.631 ==> default: Waiting for SSH to become available... 00:01:14.631 ==> default: Configuring and enabling network interfaces... 00:01:17.165 default: SSH address: 192.168.121.155:22 00:01:17.165 default: SSH username: vagrant 00:01:17.165 default: SSH auth method: private key 00:01:19.071 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:27.205 ==> default: Mounting SSHFS shared folder... 00:01:28.579 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:28.579 ==> default: Checking Mount.. 00:01:29.953 ==> default: Folder Successfully Mounted! 00:01:29.953 ==> default: Running provisioner: file... 00:01:30.518 default: ~/.gitconfig => .gitconfig 00:01:31.085 00:01:31.085 SUCCESS! 00:01:31.085 00:01:31.085 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:31.085 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:31.085 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:31.085 00:01:31.103 [Pipeline] } 00:01:31.119 [Pipeline] // stage 00:01:31.131 [Pipeline] dir 00:01:31.132 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:31.134 [Pipeline] { 00:01:31.146 [Pipeline] catchError 00:01:31.148 [Pipeline] { 00:01:31.160 [Pipeline] sh 00:01:31.490 + vagrant ssh-config --host vagrant 00:01:31.490 + sed -ne /^Host/,$p 00:01:31.490 + tee ssh_conf 00:01:34.776 Host vagrant 00:01:34.776 HostName 192.168.121.155 00:01:34.776 User vagrant 00:01:34.776 Port 22 00:01:34.776 UserKnownHostsFile /dev/null 00:01:34.776 StrictHostKeyChecking no 00:01:34.776 PasswordAuthentication no 00:01:34.776 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:34.776 IdentitiesOnly yes 00:01:34.776 LogLevel FATAL 00:01:34.776 ForwardAgent yes 00:01:34.776 ForwardX11 yes 00:01:34.776 00:01:34.791 [Pipeline] withEnv 00:01:34.793 [Pipeline] { 00:01:34.807 [Pipeline] sh 00:01:35.086 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:35.086 source /etc/os-release 00:01:35.086 [[ -e /image.version ]] && img=$(< /image.version) 00:01:35.086 # Minimal, systemd-like check. 
00:01:35.086 if [[ -e /.dockerenv ]]; then 00:01:35.086 # Clear garbage from the node's name: 00:01:35.086 # agt-er_autotest_547-896 -> autotest_547-896 00:01:35.086 # $HOSTNAME is the actual container id 00:01:35.086 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:35.086 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:35.086 # We can assume this is a mount from a host where container is running, 00:01:35.086 # so fetch its hostname to easily identify the target swarm worker. 00:01:35.086 container="$(< /etc/hostname) ($agent)" 00:01:35.086 else 00:01:35.086 # Fallback 00:01:35.086 container=$agent 00:01:35.086 fi 00:01:35.086 fi 00:01:35.086 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:35.086 00:01:35.355 [Pipeline] } 00:01:35.369 [Pipeline] // withEnv 00:01:35.379 [Pipeline] setCustomBuildProperty 00:01:35.393 [Pipeline] stage 00:01:35.395 [Pipeline] { (Tests) 00:01:35.411 [Pipeline] sh 00:01:35.692 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:35.704 [Pipeline] sh 00:01:35.999 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:36.013 [Pipeline] timeout 00:01:36.013 Timeout set to expire in 1 hr 0 min 00:01:36.015 [Pipeline] { 00:01:36.028 [Pipeline] sh 00:01:36.308 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:36.875 HEAD is now at 717acfa62 test/common: Move nvme_namespace_revert() to nvme/functions.sh 00:01:36.887 [Pipeline] sh 00:01:37.167 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:37.439 [Pipeline] sh 00:01:37.718 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:37.733 [Pipeline] sh 00:01:38.012 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:38.012 ++ readlink -f spdk_repo 00:01:38.271 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:38.271 + [[ -n /home/vagrant/spdk_repo ]] 00:01:38.271 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:38.271 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:38.271 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:38.271 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:38.271 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:38.271 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:38.271 + cd /home/vagrant/spdk_repo 00:01:38.271 + source /etc/os-release 00:01:38.271 ++ NAME='Fedora Linux' 00:01:38.271 ++ VERSION='39 (Cloud Edition)' 00:01:38.271 ++ ID=fedora 00:01:38.271 ++ VERSION_ID=39 00:01:38.271 ++ VERSION_CODENAME= 00:01:38.271 ++ PLATFORM_ID=platform:f39 00:01:38.271 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:38.271 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:38.271 ++ LOGO=fedora-logo-icon 00:01:38.271 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:38.271 ++ HOME_URL=https://fedoraproject.org/ 00:01:38.271 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:38.271 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:38.271 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:38.271 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:38.271 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:38.271 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:38.271 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:38.271 ++ SUPPORT_END=2024-11-12 00:01:38.271 ++ VARIANT='Cloud Edition' 00:01:38.271 ++ VARIANT_ID=cloud 00:01:38.271 + uname -a 00:01:38.271 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:38.271 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:38.530 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:38.530 Hugepages 00:01:38.530 node hugesize free / total 00:01:38.530 node0 1048576kB 0 / 0 00:01:38.530 node0 2048kB 0 / 0 00:01:38.530 00:01:38.530 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:38.789 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:38.789 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:38.789 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:38.789 + rm -f /tmp/spdk-ld-path 00:01:38.789 + source autorun-spdk.conf 00:01:38.789 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.789 ++ SPDK_TEST_NVMF=1 00:01:38.789 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.789 ++ SPDK_TEST_URING=1 00:01:38.789 ++ SPDK_TEST_USDT=1 00:01:38.789 ++ SPDK_RUN_UBSAN=1 00:01:38.789 ++ NET_TYPE=virt 00:01:38.789 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.789 ++ RUN_NIGHTLY=0 00:01:38.789 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:38.789 + [[ -n '' ]] 00:01:38.789 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:38.789 + for M in /var/spdk/build-*-manifest.txt 00:01:38.789 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:38.789 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.789 + for M in /var/spdk/build-*-manifest.txt 00:01:38.789 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:38.789 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.789 + for M in /var/spdk/build-*-manifest.txt 00:01:38.789 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:38.789 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.789 ++ uname 00:01:38.789 + [[ Linux == \L\i\n\u\x ]] 00:01:38.789 + sudo dmesg -T 00:01:38.789 + sudo dmesg --clear 00:01:38.789 + dmesg_pid=5199 00:01:38.790 + sudo dmesg -Tw 00:01:38.790 + [[ Fedora Linux == FreeBSD ]] 00:01:38.790 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.790 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.790 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:38.790 + [[ -x /usr/src/fio-static/fio ]] 00:01:38.790 + export FIO_BIN=/usr/src/fio-static/fio 00:01:38.790 + FIO_BIN=/usr/src/fio-static/fio 00:01:38.790 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:38.790 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:38.790 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:38.790 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.790 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.790 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:38.790 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.790 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.790 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:38.790 08:14:26 -- common/autotest_common.sh@1637 -- $ [[ n == y ]] 00:01:38.790 08:14:26 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:38.790 08:14:26 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.790 08:14:26 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:38.790 08:14:26 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.790 08:14:26 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:38.790 08:14:26 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:38.790 08:14:26 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:38.790 08:14:26 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:38.790 08:14:26 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.790 08:14:26 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:38.790 08:14:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:38.790 08:14:26 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:39.049 08:14:26 -- common/autotest_common.sh@1637 -- $ [[ n == y ]] 00:01:39.049 08:14:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:39.049 08:14:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:39.049 08:14:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:39.049 08:14:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:39.049 08:14:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:39.049 08:14:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.049 08:14:26 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.049 08:14:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.049 08:14:26 -- paths/export.sh@5 -- $ export PATH 00:01:39.049 08:14:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.049 08:14:26 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:39.049 08:14:26 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:39.049 08:14:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732090466.XXXXXX 00:01:39.049 08:14:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732090466.eqnpSL 00:01:39.049 08:14:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:39.049 08:14:26 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:39.049 08:14:26 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:39.049 08:14:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:39.049 08:14:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:39.049 08:14:26 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:39.049 08:14:26 -- common/autotest_common.sh@412 -- $ xtrace_disable 00:01:39.049 08:14:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.049 08:14:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:39.049 08:14:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:39.049 08:14:26 -- pm/common@17 -- $ local 
monitor 00:01:39.049 08:14:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.049 08:14:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.049 08:14:26 -- pm/common@25 -- $ sleep 1 00:01:39.049 08:14:26 -- pm/common@21 -- $ date +%s 00:01:39.049 08:14:26 -- pm/common@21 -- $ date +%s 00:01:39.049 08:14:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732090466 00:01:39.049 08:14:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732090466 00:01:39.049 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732090466_collect-cpu-load.pm.log 00:01:39.049 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732090466_collect-vmstat.pm.log 00:01:39.985 08:14:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:39.985 08:14:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:39.985 08:14:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:39.985 08:14:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:39.985 08:14:27 -- spdk/autobuild.sh@16 -- $ date -u 00:01:39.985 Wed Nov 20 08:14:27 AM UTC 2024 00:01:39.985 08:14:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:39.985 v25.01-pre-200-g717acfa62 00:01:39.985 08:14:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:39.985 08:14:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:39.986 08:14:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:39.986 08:14:27 -- common/autotest_common.sh@1108 -- $ '[' 3 -le 1 ']' 00:01:39.986 08:14:27 -- common/autotest_common.sh@1114 -- $ xtrace_disable 00:01:39.986 08:14:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.986 ************************************ 00:01:39.986 START TEST ubsan 00:01:39.986 ************************************ 00:01:39.986 using ubsan 00:01:39.986 08:14:27 ubsan -- common/autotest_common.sh@1132 -- $ echo 'using ubsan' 00:01:39.986 00:01:39.986 real 0m0.001s 00:01:39.986 user 0m0.000s 00:01:39.986 sys 0m0.000s 00:01:39.986 08:14:27 ubsan -- common/autotest_common.sh@1133 -- $ xtrace_disable 00:01:39.986 08:14:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:39.986 ************************************ 00:01:39.986 END TEST ubsan 00:01:39.986 ************************************ 00:01:40.244 08:14:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:40.244 08:14:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:40.244 08:14:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:40.244 08:14:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:40.244 08:14:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:40.244 08:14:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:40.244 08:14:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:40.244 08:14:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:40.244 08:14:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:40.244 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:40.244 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:40.814 Using 
'verbs' RDMA provider 00:01:53.959 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:08.860 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:08.860 Creating mk/config.mk...done. 00:02:08.860 Creating mk/cc.flags.mk...done. 00:02:08.860 Type 'make' to build. 00:02:08.860 08:14:54 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:08.860 08:14:54 -- common/autotest_common.sh@1108 -- $ '[' 3 -le 1 ']' 00:02:08.860 08:14:54 -- common/autotest_common.sh@1114 -- $ xtrace_disable 00:02:08.860 08:14:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.860 ************************************ 00:02:08.860 START TEST make 00:02:08.860 ************************************ 00:02:08.860 08:14:54 make -- common/autotest_common.sh@1132 -- $ make -j10 00:02:08.860 make[1]: Nothing to be done for 'all'. 00:02:21.112 The Meson build system 00:02:21.112 Version: 1.5.0 00:02:21.112 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:21.112 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:21.112 Build type: native build 00:02:21.112 Program cat found: YES (/usr/bin/cat) 00:02:21.112 Project name: DPDK 00:02:21.112 Project version: 24.03.0 00:02:21.112 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:21.112 C linker for the host machine: cc ld.bfd 2.40-14 00:02:21.112 Host machine cpu family: x86_64 00:02:21.112 Host machine cpu: x86_64 00:02:21.112 Message: ## Building in Developer Mode ## 00:02:21.112 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:21.112 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:21.112 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:21.112 Program python3 found: YES (/usr/bin/python3) 00:02:21.112 Program cat found: YES (/usr/bin/cat) 00:02:21.112 Compiler for C supports arguments -march=native: YES 00:02:21.112 Checking for size of "void *" : 8 00:02:21.112 Checking for size of "void *" : 8 (cached) 00:02:21.112 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:21.112 Library m found: YES 00:02:21.112 Library numa found: YES 00:02:21.112 Has header "numaif.h" : YES 00:02:21.112 Library fdt found: NO 00:02:21.112 Library execinfo found: NO 00:02:21.112 Has header "execinfo.h" : YES 00:02:21.112 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:21.112 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:21.112 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:21.112 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:21.112 Run-time dependency openssl found: YES 3.1.1 00:02:21.112 Run-time dependency libpcap found: YES 1.10.4 00:02:21.112 Has header "pcap.h" with dependency libpcap: YES 00:02:21.112 Compiler for C supports arguments -Wcast-qual: YES 00:02:21.112 Compiler for C supports arguments -Wdeprecated: YES 00:02:21.112 Compiler for C supports arguments -Wformat: YES 00:02:21.112 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:21.112 Compiler for C supports arguments -Wformat-security: NO 00:02:21.112 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:21.112 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:21.112 Compiler for C supports arguments -Wnested-externs: YES 00:02:21.112 Compiler for C supports arguments -Wold-style-definition: YES 00:02:21.112 Compiler for 
C supports arguments -Wpointer-arith: YES 00:02:21.112 Compiler for C supports arguments -Wsign-compare: YES 00:02:21.112 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:21.112 Compiler for C supports arguments -Wundef: YES 00:02:21.112 Compiler for C supports arguments -Wwrite-strings: YES 00:02:21.112 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:21.112 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:21.112 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:21.112 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:21.112 Program objdump found: YES (/usr/bin/objdump) 00:02:21.112 Compiler for C supports arguments -mavx512f: YES 00:02:21.112 Checking if "AVX512 checking" compiles: YES 00:02:21.112 Fetching value of define "__SSE4_2__" : 1 00:02:21.112 Fetching value of define "__AES__" : 1 00:02:21.112 Fetching value of define "__AVX__" : 1 00:02:21.112 Fetching value of define "__AVX2__" : 1 00:02:21.112 Fetching value of define "__AVX512BW__" : (undefined) 00:02:21.112 Fetching value of define "__AVX512CD__" : (undefined) 00:02:21.112 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:21.112 Fetching value of define "__AVX512F__" : (undefined) 00:02:21.112 Fetching value of define "__AVX512VL__" : (undefined) 00:02:21.112 Fetching value of define "__PCLMUL__" : 1 00:02:21.112 Fetching value of define "__RDRND__" : 1 00:02:21.112 Fetching value of define "__RDSEED__" : 1 00:02:21.112 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:21.112 Fetching value of define "__znver1__" : (undefined) 00:02:21.112 Fetching value of define "__znver2__" : (undefined) 00:02:21.112 Fetching value of define "__znver3__" : (undefined) 00:02:21.112 Fetching value of define "__znver4__" : (undefined) 00:02:21.112 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:21.112 Message: lib/log: Defining dependency "log" 00:02:21.112 Message: lib/kvargs: Defining dependency "kvargs" 00:02:21.112 Message: lib/telemetry: Defining dependency "telemetry" 00:02:21.112 Checking for function "getentropy" : NO 00:02:21.112 Message: lib/eal: Defining dependency "eal" 00:02:21.112 Message: lib/ring: Defining dependency "ring" 00:02:21.112 Message: lib/rcu: Defining dependency "rcu" 00:02:21.112 Message: lib/mempool: Defining dependency "mempool" 00:02:21.112 Message: lib/mbuf: Defining dependency "mbuf" 00:02:21.112 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:21.112 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:21.112 Compiler for C supports arguments -mpclmul: YES 00:02:21.112 Compiler for C supports arguments -maes: YES 00:02:21.112 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.112 Compiler for C supports arguments -mavx512bw: YES 00:02:21.112 Compiler for C supports arguments -mavx512dq: YES 00:02:21.112 Compiler for C supports arguments -mavx512vl: YES 00:02:21.112 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:21.112 Compiler for C supports arguments -mavx2: YES 00:02:21.112 Compiler for C supports arguments -mavx: YES 00:02:21.112 Message: lib/net: Defining dependency "net" 00:02:21.112 Message: lib/meter: Defining dependency "meter" 00:02:21.112 Message: lib/ethdev: Defining dependency "ethdev" 00:02:21.112 Message: lib/pci: Defining dependency "pci" 00:02:21.112 Message: lib/cmdline: Defining dependency "cmdline" 00:02:21.112 Message: lib/hash: Defining dependency "hash" 00:02:21.112 Message: lib/timer: 
Defining dependency "timer" 00:02:21.112 Message: lib/compressdev: Defining dependency "compressdev" 00:02:21.112 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:21.112 Message: lib/dmadev: Defining dependency "dmadev" 00:02:21.112 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:21.113 Message: lib/power: Defining dependency "power" 00:02:21.113 Message: lib/reorder: Defining dependency "reorder" 00:02:21.113 Message: lib/security: Defining dependency "security" 00:02:21.113 Has header "linux/userfaultfd.h" : YES 00:02:21.113 Has header "linux/vduse.h" : YES 00:02:21.113 Message: lib/vhost: Defining dependency "vhost" 00:02:21.113 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:21.113 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:21.113 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:21.113 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:21.113 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:21.113 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:21.113 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:21.113 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:21.113 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:21.113 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:21.113 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:21.113 Configuring doxy-api-html.conf using configuration 00:02:21.113 Configuring doxy-api-man.conf using configuration 00:02:21.113 Program mandb found: YES (/usr/bin/mandb) 00:02:21.113 Program sphinx-build found: NO 00:02:21.113 Configuring rte_build_config.h using configuration 00:02:21.113 Message: 00:02:21.113 ================= 00:02:21.113 Applications Enabled 00:02:21.113 ================= 00:02:21.113 00:02:21.113 apps: 00:02:21.113 00:02:21.113 00:02:21.113 Message: 00:02:21.113 ================= 00:02:21.113 Libraries Enabled 00:02:21.113 ================= 00:02:21.113 00:02:21.113 libs: 00:02:21.113 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:21.113 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:21.113 cryptodev, dmadev, power, reorder, security, vhost, 00:02:21.113 00:02:21.113 Message: 00:02:21.113 =============== 00:02:21.113 Drivers Enabled 00:02:21.113 =============== 00:02:21.113 00:02:21.113 common: 00:02:21.113 00:02:21.113 bus: 00:02:21.113 pci, vdev, 00:02:21.113 mempool: 00:02:21.113 ring, 00:02:21.113 dma: 00:02:21.113 00:02:21.113 net: 00:02:21.113 00:02:21.113 crypto: 00:02:21.113 00:02:21.113 compress: 00:02:21.113 00:02:21.113 vdpa: 00:02:21.113 00:02:21.113 00:02:21.113 Message: 00:02:21.113 ================= 00:02:21.113 Content Skipped 00:02:21.113 ================= 00:02:21.113 00:02:21.113 apps: 00:02:21.113 dumpcap: explicitly disabled via build config 00:02:21.113 graph: explicitly disabled via build config 00:02:21.113 pdump: explicitly disabled via build config 00:02:21.113 proc-info: explicitly disabled via build config 00:02:21.113 test-acl: explicitly disabled via build config 00:02:21.113 test-bbdev: explicitly disabled via build config 00:02:21.113 test-cmdline: explicitly disabled via build config 00:02:21.113 test-compress-perf: explicitly disabled via build config 00:02:21.113 test-crypto-perf: explicitly disabled via build config 00:02:21.113 test-dma-perf: explicitly disabled via build config 
00:02:21.113 test-eventdev: explicitly disabled via build config 00:02:21.113 test-fib: explicitly disabled via build config 00:02:21.113 test-flow-perf: explicitly disabled via build config 00:02:21.113 test-gpudev: explicitly disabled via build config 00:02:21.113 test-mldev: explicitly disabled via build config 00:02:21.113 test-pipeline: explicitly disabled via build config 00:02:21.113 test-pmd: explicitly disabled via build config 00:02:21.113 test-regex: explicitly disabled via build config 00:02:21.113 test-sad: explicitly disabled via build config 00:02:21.113 test-security-perf: explicitly disabled via build config 00:02:21.113 00:02:21.113 libs: 00:02:21.113 argparse: explicitly disabled via build config 00:02:21.113 metrics: explicitly disabled via build config 00:02:21.113 acl: explicitly disabled via build config 00:02:21.113 bbdev: explicitly disabled via build config 00:02:21.113 bitratestats: explicitly disabled via build config 00:02:21.113 bpf: explicitly disabled via build config 00:02:21.113 cfgfile: explicitly disabled via build config 00:02:21.113 distributor: explicitly disabled via build config 00:02:21.113 efd: explicitly disabled via build config 00:02:21.113 eventdev: explicitly disabled via build config 00:02:21.113 dispatcher: explicitly disabled via build config 00:02:21.113 gpudev: explicitly disabled via build config 00:02:21.113 gro: explicitly disabled via build config 00:02:21.113 gso: explicitly disabled via build config 00:02:21.113 ip_frag: explicitly disabled via build config 00:02:21.113 jobstats: explicitly disabled via build config 00:02:21.113 latencystats: explicitly disabled via build config 00:02:21.113 lpm: explicitly disabled via build config 00:02:21.113 member: explicitly disabled via build config 00:02:21.113 pcapng: explicitly disabled via build config 00:02:21.113 rawdev: explicitly disabled via build config 00:02:21.113 regexdev: explicitly disabled via build config 00:02:21.113 mldev: explicitly disabled via build config 00:02:21.113 rib: explicitly disabled via build config 00:02:21.113 sched: explicitly disabled via build config 00:02:21.113 stack: explicitly disabled via build config 00:02:21.113 ipsec: explicitly disabled via build config 00:02:21.113 pdcp: explicitly disabled via build config 00:02:21.113 fib: explicitly disabled via build config 00:02:21.113 port: explicitly disabled via build config 00:02:21.113 pdump: explicitly disabled via build config 00:02:21.113 table: explicitly disabled via build config 00:02:21.113 pipeline: explicitly disabled via build config 00:02:21.113 graph: explicitly disabled via build config 00:02:21.113 node: explicitly disabled via build config 00:02:21.113 00:02:21.113 drivers: 00:02:21.113 common/cpt: not in enabled drivers build config 00:02:21.113 common/dpaax: not in enabled drivers build config 00:02:21.113 common/iavf: not in enabled drivers build config 00:02:21.113 common/idpf: not in enabled drivers build config 00:02:21.113 common/ionic: not in enabled drivers build config 00:02:21.113 common/mvep: not in enabled drivers build config 00:02:21.113 common/octeontx: not in enabled drivers build config 00:02:21.113 bus/auxiliary: not in enabled drivers build config 00:02:21.113 bus/cdx: not in enabled drivers build config 00:02:21.113 bus/dpaa: not in enabled drivers build config 00:02:21.113 bus/fslmc: not in enabled drivers build config 00:02:21.113 bus/ifpga: not in enabled drivers build config 00:02:21.113 bus/platform: not in enabled drivers build config 00:02:21.113 bus/uacce: 
not in enabled drivers build config 00:02:21.113 bus/vmbus: not in enabled drivers build config 00:02:21.113 common/cnxk: not in enabled drivers build config 00:02:21.113 common/mlx5: not in enabled drivers build config 00:02:21.113 common/nfp: not in enabled drivers build config 00:02:21.113 common/nitrox: not in enabled drivers build config 00:02:21.113 common/qat: not in enabled drivers build config 00:02:21.113 common/sfc_efx: not in enabled drivers build config 00:02:21.113 mempool/bucket: not in enabled drivers build config 00:02:21.113 mempool/cnxk: not in enabled drivers build config 00:02:21.113 mempool/dpaa: not in enabled drivers build config 00:02:21.113 mempool/dpaa2: not in enabled drivers build config 00:02:21.113 mempool/octeontx: not in enabled drivers build config 00:02:21.113 mempool/stack: not in enabled drivers build config 00:02:21.113 dma/cnxk: not in enabled drivers build config 00:02:21.113 dma/dpaa: not in enabled drivers build config 00:02:21.113 dma/dpaa2: not in enabled drivers build config 00:02:21.113 dma/hisilicon: not in enabled drivers build config 00:02:21.113 dma/idxd: not in enabled drivers build config 00:02:21.113 dma/ioat: not in enabled drivers build config 00:02:21.113 dma/skeleton: not in enabled drivers build config 00:02:21.113 net/af_packet: not in enabled drivers build config 00:02:21.113 net/af_xdp: not in enabled drivers build config 00:02:21.113 net/ark: not in enabled drivers build config 00:02:21.113 net/atlantic: not in enabled drivers build config 00:02:21.113 net/avp: not in enabled drivers build config 00:02:21.113 net/axgbe: not in enabled drivers build config 00:02:21.113 net/bnx2x: not in enabled drivers build config 00:02:21.113 net/bnxt: not in enabled drivers build config 00:02:21.113 net/bonding: not in enabled drivers build config 00:02:21.113 net/cnxk: not in enabled drivers build config 00:02:21.113 net/cpfl: not in enabled drivers build config 00:02:21.113 net/cxgbe: not in enabled drivers build config 00:02:21.113 net/dpaa: not in enabled drivers build config 00:02:21.113 net/dpaa2: not in enabled drivers build config 00:02:21.113 net/e1000: not in enabled drivers build config 00:02:21.113 net/ena: not in enabled drivers build config 00:02:21.113 net/enetc: not in enabled drivers build config 00:02:21.113 net/enetfec: not in enabled drivers build config 00:02:21.113 net/enic: not in enabled drivers build config 00:02:21.113 net/failsafe: not in enabled drivers build config 00:02:21.113 net/fm10k: not in enabled drivers build config 00:02:21.113 net/gve: not in enabled drivers build config 00:02:21.113 net/hinic: not in enabled drivers build config 00:02:21.113 net/hns3: not in enabled drivers build config 00:02:21.113 net/i40e: not in enabled drivers build config 00:02:21.113 net/iavf: not in enabled drivers build config 00:02:21.113 net/ice: not in enabled drivers build config 00:02:21.113 net/idpf: not in enabled drivers build config 00:02:21.113 net/igc: not in enabled drivers build config 00:02:21.113 net/ionic: not in enabled drivers build config 00:02:21.113 net/ipn3ke: not in enabled drivers build config 00:02:21.113 net/ixgbe: not in enabled drivers build config 00:02:21.113 net/mana: not in enabled drivers build config 00:02:21.113 net/memif: not in enabled drivers build config 00:02:21.113 net/mlx4: not in enabled drivers build config 00:02:21.113 net/mlx5: not in enabled drivers build config 00:02:21.113 net/mvneta: not in enabled drivers build config 00:02:21.113 net/mvpp2: not in enabled drivers build config 
00:02:21.113 net/netvsc: not in enabled drivers build config 00:02:21.113 net/nfb: not in enabled drivers build config 00:02:21.113 net/nfp: not in enabled drivers build config 00:02:21.114 net/ngbe: not in enabled drivers build config 00:02:21.114 net/null: not in enabled drivers build config 00:02:21.114 net/octeontx: not in enabled drivers build config 00:02:21.114 net/octeon_ep: not in enabled drivers build config 00:02:21.114 net/pcap: not in enabled drivers build config 00:02:21.114 net/pfe: not in enabled drivers build config 00:02:21.114 net/qede: not in enabled drivers build config 00:02:21.114 net/ring: not in enabled drivers build config 00:02:21.114 net/sfc: not in enabled drivers build config 00:02:21.114 net/softnic: not in enabled drivers build config 00:02:21.114 net/tap: not in enabled drivers build config 00:02:21.114 net/thunderx: not in enabled drivers build config 00:02:21.114 net/txgbe: not in enabled drivers build config 00:02:21.114 net/vdev_netvsc: not in enabled drivers build config 00:02:21.114 net/vhost: not in enabled drivers build config 00:02:21.114 net/virtio: not in enabled drivers build config 00:02:21.114 net/vmxnet3: not in enabled drivers build config 00:02:21.114 raw/*: missing internal dependency, "rawdev" 00:02:21.114 crypto/armv8: not in enabled drivers build config 00:02:21.114 crypto/bcmfs: not in enabled drivers build config 00:02:21.114 crypto/caam_jr: not in enabled drivers build config 00:02:21.114 crypto/ccp: not in enabled drivers build config 00:02:21.114 crypto/cnxk: not in enabled drivers build config 00:02:21.114 crypto/dpaa_sec: not in enabled drivers build config 00:02:21.114 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.114 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.114 crypto/mlx5: not in enabled drivers build config 00:02:21.114 crypto/mvsam: not in enabled drivers build config 00:02:21.114 crypto/nitrox: not in enabled drivers build config 00:02:21.114 crypto/null: not in enabled drivers build config 00:02:21.114 crypto/octeontx: not in enabled drivers build config 00:02:21.114 crypto/openssl: not in enabled drivers build config 00:02:21.114 crypto/scheduler: not in enabled drivers build config 00:02:21.114 crypto/uadk: not in enabled drivers build config 00:02:21.114 crypto/virtio: not in enabled drivers build config 00:02:21.114 compress/isal: not in enabled drivers build config 00:02:21.114 compress/mlx5: not in enabled drivers build config 00:02:21.114 compress/nitrox: not in enabled drivers build config 00:02:21.114 compress/octeontx: not in enabled drivers build config 00:02:21.114 compress/zlib: not in enabled drivers build config 00:02:21.114 regex/*: missing internal dependency, "regexdev" 00:02:21.114 ml/*: missing internal dependency, "mldev" 00:02:21.114 vdpa/ifc: not in enabled drivers build config 00:02:21.114 vdpa/mlx5: not in enabled drivers build config 00:02:21.114 vdpa/nfp: not in enabled drivers build config 00:02:21.114 vdpa/sfc: not in enabled drivers build config 00:02:21.114 event/*: missing internal dependency, "eventdev" 00:02:21.114 baseband/*: missing internal dependency, "bbdev" 00:02:21.114 gpu/*: missing internal dependency, "gpudev" 00:02:21.114 00:02:21.114 00:02:21.114 Build targets in project: 85 00:02:21.114 00:02:21.114 DPDK 24.03.0 00:02:21.114 00:02:21.114 User defined options 00:02:21.114 buildtype : debug 00:02:21.114 default_library : shared 00:02:21.114 libdir : lib 00:02:21.114 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:21.114 c_args : 
-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:21.114 c_link_args : 00:02:21.114 cpu_instruction_set: native 00:02:21.114 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:21.114 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:21.114 enable_docs : false 00:02:21.114 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:21.114 enable_kmods : false 00:02:21.114 max_lcores : 128 00:02:21.114 tests : false 00:02:21.114 00:02:21.114 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.114 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:21.114 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:21.114 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:21.114 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:21.114 [4/268] Linking static target lib/librte_kvargs.a 00:02:21.114 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:21.114 [6/268] Linking static target lib/librte_log.a 00:02:21.114 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.114 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:21.114 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:21.114 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:21.114 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:21.114 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:21.114 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:21.114 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:21.114 [15/268] Linking static target lib/librte_telemetry.a 00:02:21.114 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:21.114 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:21.114 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:21.373 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.373 [20/268] Linking target lib/librte_log.so.24.1 00:02:21.632 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:21.632 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:21.891 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:21.891 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:21.891 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:21.891 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:21.891 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:21.891 [28/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:21.891 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.891 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:22.149 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:22.149 [32/268] Linking target lib/librte_telemetry.so.24.1 00:02:22.149 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:22.149 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:22.408 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:22.408 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:22.408 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:22.667 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:22.667 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:22.667 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:22.667 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:22.925 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:22.925 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:22.925 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:23.184 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:23.184 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:23.184 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:23.184 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:23.443 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:23.443 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.443 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:23.702 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:23.702 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:23.961 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:23.961 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:23.961 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:23.961 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:24.220 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:24.220 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:24.220 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:24.220 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:24.479 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:24.739 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:24.739 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:24.739 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:24.739 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:24.739 [67/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:24.998 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:24.998 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:25.257 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:25.257 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:25.257 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:25.257 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:25.257 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:25.515 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:25.515 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:25.515 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:25.515 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:25.515 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:25.774 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:25.774 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:25.774 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:26.033 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:26.033 [84/268] Linking static target lib/librte_ring.a 00:02:26.292 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:26.292 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:26.292 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:26.292 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:26.292 [89/268] Linking static target lib/librte_rcu.a 00:02:26.292 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:26.292 [91/268] Linking static target lib/librte_eal.a 00:02:26.551 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.551 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:26.551 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:26.551 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:26.551 [96/268] Linking static target lib/librte_mempool.a 00:02:26.812 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:26.812 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:26.812 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.812 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:26.812 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:27.072 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:27.331 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:27.331 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:27.331 [105/268] Linking static target lib/librte_mbuf.a 00:02:27.331 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:27.331 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:27.331 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:27.331 [109/268] Linking static target lib/librte_net.a 00:02:27.590 
[110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:27.590 [111/268] Linking static target lib/librte_meter.a 00:02:27.849 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:27.849 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:27.849 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.849 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.849 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.109 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:28.109 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:28.368 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.368 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:28.627 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:28.627 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:28.886 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:29.145 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:29.145 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:29.145 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:29.145 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:29.145 [128/268] Linking static target lib/librte_pci.a 00:02:29.404 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:29.404 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:29.404 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:29.404 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:29.404 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:29.404 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:29.404 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:29.404 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:29.404 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:29.404 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:29.404 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:29.663 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:29.663 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:29.663 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:29.663 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:29.663 [144/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.663 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:29.922 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:30.184 [147/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:30.184 [148/268] Linking static target lib/librte_timer.a 00:02:30.184 [149/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:30.442 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:30.442 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:30.442 [152/268] Linking static target lib/librte_cmdline.a 00:02:30.442 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:30.442 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:30.701 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:30.701 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:30.701 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:30.701 [158/268] Linking static target lib/librte_ethdev.a 00:02:30.701 [159/268] Linking static target lib/librte_hash.a 00:02:30.960 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.960 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:30.960 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:30.960 [163/268] Linking static target lib/librte_compressdev.a 00:02:31.219 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:31.219 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:31.479 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:31.479 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:31.479 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:31.479 [169/268] Linking static target lib/librte_dmadev.a 00:02:31.738 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:31.738 [171/268] Linking static target lib/librte_cryptodev.a 00:02:31.738 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:31.738 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:31.997 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:31.997 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.997 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.997 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.997 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:32.564 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.564 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:32.564 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:32.564 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:32.564 [183/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:32.564 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:32.564 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:32.822 [186/268] Linking static target lib/librte_power.a 00:02:33.081 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:33.081 [188/268] Linking static target lib/librte_reorder.a 00:02:33.345 [189/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:02:33.345 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:33.345 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:33.345 [192/268] Linking static target lib/librte_security.a 00:02:33.345 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:33.605 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.605 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:33.864 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.122 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.122 [198/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.381 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:34.381 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:34.381 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:34.640 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:34.640 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:34.640 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:34.898 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:34.898 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:35.156 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:35.156 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:35.156 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:35.156 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:35.414 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:35.414 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:35.414 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:35.414 [214/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:35.414 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:35.415 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.415 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.415 [218/268] Linking static target drivers/librte_bus_pci.a 00:02:35.415 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:35.673 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.673 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.673 [222/268] Linking static target drivers/librte_bus_vdev.a 00:02:35.673 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:35.673 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.673 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.673 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:35.932 [227/268] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.932 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.868 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:36.868 [230/268] Linking static target lib/librte_vhost.a 00:02:37.469 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.469 [232/268] Linking target lib/librte_eal.so.24.1 00:02:37.738 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:37.738 [234/268] Linking target lib/librte_pci.so.24.1 00:02:37.738 [235/268] Linking target lib/librte_timer.so.24.1 00:02:37.738 [236/268] Linking target lib/librte_ring.so.24.1 00:02:37.738 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:37.738 [238/268] Linking target lib/librte_meter.so.24.1 00:02:37.738 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:37.998 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:37.998 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:37.998 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:37.998 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:37.998 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:37.998 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:37.998 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:37.998 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:37.998 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:37.998 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:38.257 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:38.257 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:38.257 [252/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.257 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:38.257 [254/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.257 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:38.257 [256/268] Linking target lib/librte_net.so.24.1 00:02:38.257 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:38.257 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:38.516 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:38.516 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:38.516 [261/268] Linking target lib/librte_hash.so.24.1 00:02:38.516 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:38.516 [263/268] Linking target lib/librte_security.so.24.1 00:02:38.516 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:38.775 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:38.775 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:38.775 [267/268] Linking target lib/librte_power.so.24.1 00:02:38.775 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:38.775 INFO: autodetecting backend as ninja 00:02:38.775 INFO: calculating backend command to run: /usr/local/bin/ninja 
-C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:05.314 CC lib/ut/ut.o 00:03:05.314 CC lib/ut_mock/mock.o 00:03:05.314 CC lib/log/log.o 00:03:05.314 CC lib/log/log_flags.o 00:03:05.314 CC lib/log/log_deprecated.o 00:03:05.314 LIB libspdk_ut_mock.a 00:03:05.314 LIB libspdk_ut.a 00:03:05.314 LIB libspdk_log.a 00:03:05.314 SO libspdk_ut_mock.so.6.0 00:03:05.314 SO libspdk_ut.so.2.0 00:03:05.314 SO libspdk_log.so.7.1 00:03:05.314 SYMLINK libspdk_ut_mock.so 00:03:05.314 SYMLINK libspdk_ut.so 00:03:05.314 SYMLINK libspdk_log.so 00:03:05.314 CXX lib/trace_parser/trace.o 00:03:05.314 CC lib/ioat/ioat.o 00:03:05.314 CC lib/util/base64.o 00:03:05.314 CC lib/util/bit_array.o 00:03:05.314 CC lib/util/cpuset.o 00:03:05.314 CC lib/util/crc16.o 00:03:05.314 CC lib/util/crc32.o 00:03:05.314 CC lib/util/crc32c.o 00:03:05.314 CC lib/dma/dma.o 00:03:05.314 CC lib/vfio_user/host/vfio_user_pci.o 00:03:05.314 CC lib/util/crc32_ieee.o 00:03:05.314 CC lib/vfio_user/host/vfio_user.o 00:03:05.314 CC lib/util/crc64.o 00:03:05.314 CC lib/util/dif.o 00:03:05.314 LIB libspdk_dma.a 00:03:05.314 CC lib/util/fd.o 00:03:05.314 CC lib/util/fd_group.o 00:03:05.314 SO libspdk_dma.so.5.0 00:03:05.314 CC lib/util/file.o 00:03:05.314 SYMLINK libspdk_dma.so 00:03:05.314 CC lib/util/hexlify.o 00:03:05.314 LIB libspdk_ioat.a 00:03:05.314 CC lib/util/iov.o 00:03:05.314 SO libspdk_ioat.so.7.0 00:03:05.314 CC lib/util/math.o 00:03:05.314 LIB libspdk_vfio_user.a 00:03:05.314 CC lib/util/net.o 00:03:05.314 SYMLINK libspdk_ioat.so 00:03:05.314 CC lib/util/pipe.o 00:03:05.314 SO libspdk_vfio_user.so.5.0 00:03:05.314 CC lib/util/strerror_tls.o 00:03:05.314 CC lib/util/string.o 00:03:05.314 CC lib/util/uuid.o 00:03:05.314 SYMLINK libspdk_vfio_user.so 00:03:05.314 CC lib/util/xor.o 00:03:05.314 CC lib/util/zipf.o 00:03:05.314 CC lib/util/md5.o 00:03:05.573 LIB libspdk_util.a 00:03:05.831 SO libspdk_util.so.10.1 00:03:05.831 SYMLINK libspdk_util.so 00:03:05.831 LIB libspdk_trace_parser.a 00:03:05.831 SO libspdk_trace_parser.so.6.0 00:03:06.090 SYMLINK libspdk_trace_parser.so 00:03:06.090 CC lib/conf/conf.o 00:03:06.090 CC lib/env_dpdk/env.o 00:03:06.090 CC lib/vmd/vmd.o 00:03:06.090 CC lib/env_dpdk/memory.o 00:03:06.090 CC lib/env_dpdk/init.o 00:03:06.090 CC lib/env_dpdk/pci.o 00:03:06.090 CC lib/vmd/led.o 00:03:06.090 CC lib/idxd/idxd.o 00:03:06.090 CC lib/json/json_parse.o 00:03:06.090 CC lib/rdma_utils/rdma_utils.o 00:03:06.090 CC lib/env_dpdk/threads.o 00:03:06.350 LIB libspdk_conf.a 00:03:06.350 CC lib/json/json_util.o 00:03:06.350 SO libspdk_conf.so.6.0 00:03:06.350 LIB libspdk_rdma_utils.a 00:03:06.350 SYMLINK libspdk_conf.so 00:03:06.350 CC lib/json/json_write.o 00:03:06.350 SO libspdk_rdma_utils.so.1.0 00:03:06.350 CC lib/env_dpdk/pci_ioat.o 00:03:06.350 CC lib/env_dpdk/pci_virtio.o 00:03:06.350 SYMLINK libspdk_rdma_utils.so 00:03:06.350 CC lib/env_dpdk/pci_vmd.o 00:03:06.350 CC lib/env_dpdk/pci_idxd.o 00:03:06.608 CC lib/env_dpdk/pci_event.o 00:03:06.608 CC lib/idxd/idxd_user.o 00:03:06.608 CC lib/idxd/idxd_kernel.o 00:03:06.608 CC lib/env_dpdk/sigbus_handler.o 00:03:06.608 CC lib/env_dpdk/pci_dpdk.o 00:03:06.608 LIB libspdk_json.a 00:03:06.608 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:06.608 SO libspdk_json.so.6.0 00:03:06.608 LIB libspdk_vmd.a 00:03:06.885 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:06.885 SO libspdk_vmd.so.6.0 00:03:06.885 SYMLINK libspdk_json.so 00:03:06.885 LIB libspdk_idxd.a 00:03:06.885 SYMLINK libspdk_vmd.so 00:03:06.885 SO libspdk_idxd.so.12.1 00:03:06.885 CC lib/rdma_provider/common.o 00:03:06.885 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:03:06.885 SYMLINK libspdk_idxd.so 00:03:06.885 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:06.885 CC lib/jsonrpc/jsonrpc_server.o 00:03:06.885 CC lib/jsonrpc/jsonrpc_client.o 00:03:06.885 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:07.144 LIB libspdk_rdma_provider.a 00:03:07.144 SO libspdk_rdma_provider.so.7.0 00:03:07.144 SYMLINK libspdk_rdma_provider.so 00:03:07.403 LIB libspdk_jsonrpc.a 00:03:07.403 SO libspdk_jsonrpc.so.6.0 00:03:07.403 SYMLINK libspdk_jsonrpc.so 00:03:07.661 LIB libspdk_env_dpdk.a 00:03:07.661 SO libspdk_env_dpdk.so.15.1 00:03:07.661 CC lib/rpc/rpc.o 00:03:07.661 SYMLINK libspdk_env_dpdk.so 00:03:07.920 LIB libspdk_rpc.a 00:03:07.920 SO libspdk_rpc.so.6.0 00:03:07.920 SYMLINK libspdk_rpc.so 00:03:08.179 CC lib/notify/notify.o 00:03:08.179 CC lib/notify/notify_rpc.o 00:03:08.179 CC lib/keyring/keyring.o 00:03:08.179 CC lib/keyring/keyring_rpc.o 00:03:08.179 CC lib/trace/trace.o 00:03:08.179 CC lib/trace/trace_rpc.o 00:03:08.179 CC lib/trace/trace_flags.o 00:03:08.437 LIB libspdk_notify.a 00:03:08.437 SO libspdk_notify.so.6.0 00:03:08.437 LIB libspdk_keyring.a 00:03:08.437 SYMLINK libspdk_notify.so 00:03:08.695 SO libspdk_keyring.so.2.0 00:03:08.695 LIB libspdk_trace.a 00:03:08.695 SO libspdk_trace.so.11.0 00:03:08.695 SYMLINK libspdk_keyring.so 00:03:08.695 SYMLINK libspdk_trace.so 00:03:08.954 CC lib/sock/sock.o 00:03:08.954 CC lib/sock/sock_rpc.o 00:03:08.954 CC lib/thread/thread.o 00:03:08.954 CC lib/thread/iobuf.o 00:03:09.521 LIB libspdk_sock.a 00:03:09.521 SO libspdk_sock.so.10.0 00:03:09.521 SYMLINK libspdk_sock.so 00:03:09.780 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:09.780 CC lib/nvme/nvme_fabric.o 00:03:09.780 CC lib/nvme/nvme_ctrlr.o 00:03:09.780 CC lib/nvme/nvme_ns_cmd.o 00:03:09.780 CC lib/nvme/nvme_pcie.o 00:03:09.780 CC lib/nvme/nvme_ns.o 00:03:09.780 CC lib/nvme/nvme_pcie_common.o 00:03:09.780 CC lib/nvme/nvme.o 00:03:09.780 CC lib/nvme/nvme_qpair.o 00:03:10.715 LIB libspdk_thread.a 00:03:10.715 CC lib/nvme/nvme_quirks.o 00:03:10.715 SO libspdk_thread.so.11.0 00:03:10.715 SYMLINK libspdk_thread.so 00:03:10.715 CC lib/nvme/nvme_transport.o 00:03:10.715 CC lib/nvme/nvme_discovery.o 00:03:10.715 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:10.973 CC lib/accel/accel.o 00:03:10.973 CC lib/accel/accel_rpc.o 00:03:10.973 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:10.973 CC lib/nvme/nvme_tcp.o 00:03:11.232 CC lib/accel/accel_sw.o 00:03:11.232 CC lib/nvme/nvme_opal.o 00:03:11.232 CC lib/nvme/nvme_io_msg.o 00:03:11.491 CC lib/nvme/nvme_poll_group.o 00:03:11.491 CC lib/nvme/nvme_zns.o 00:03:11.491 CC lib/nvme/nvme_stubs.o 00:03:11.753 CC lib/nvme/nvme_auth.o 00:03:11.753 CC lib/blob/blobstore.o 00:03:11.753 CC lib/init/json_config.o 00:03:12.011 CC lib/nvme/nvme_cuse.o 00:03:12.011 LIB libspdk_accel.a 00:03:12.011 CC lib/init/subsystem.o 00:03:12.011 SO libspdk_accel.so.16.0 00:03:12.011 CC lib/nvme/nvme_rdma.o 00:03:12.011 CC lib/blob/request.o 00:03:12.270 SYMLINK libspdk_accel.so 00:03:12.270 CC lib/blob/zeroes.o 00:03:12.270 CC lib/blob/blob_bs_dev.o 00:03:12.270 CC lib/init/subsystem_rpc.o 00:03:12.270 CC lib/init/rpc.o 00:03:12.528 LIB libspdk_init.a 00:03:12.528 SO libspdk_init.so.6.0 00:03:12.528 CC lib/fsdev/fsdev.o 00:03:12.528 CC lib/virtio/virtio.o 00:03:12.528 CC lib/fsdev/fsdev_io.o 00:03:12.528 CC lib/bdev/bdev.o 00:03:12.528 SYMLINK libspdk_init.so 00:03:12.528 CC lib/bdev/bdev_rpc.o 00:03:12.787 CC lib/bdev/bdev_zone.o 00:03:12.787 CC lib/bdev/part.o 00:03:12.787 CC lib/bdev/scsi_nvme.o 00:03:13.045 CC 
lib/virtio/virtio_vhost_user.o 00:03:13.045 CC lib/fsdev/fsdev_rpc.o 00:03:13.045 CC lib/virtio/virtio_vfio_user.o 00:03:13.045 CC lib/virtio/virtio_pci.o 00:03:13.045 CC lib/event/app.o 00:03:13.045 CC lib/event/reactor.o 00:03:13.045 CC lib/event/log_rpc.o 00:03:13.303 LIB libspdk_fsdev.a 00:03:13.303 CC lib/event/app_rpc.o 00:03:13.303 CC lib/event/scheduler_static.o 00:03:13.303 SO libspdk_fsdev.so.2.0 00:03:13.303 LIB libspdk_virtio.a 00:03:13.303 SYMLINK libspdk_fsdev.so 00:03:13.303 SO libspdk_virtio.so.7.0 00:03:13.561 LIB libspdk_nvme.a 00:03:13.561 SYMLINK libspdk_virtio.so 00:03:13.561 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:13.561 LIB libspdk_event.a 00:03:13.818 SO libspdk_nvme.so.15.0 00:03:13.818 SO libspdk_event.so.14.0 00:03:13.818 SYMLINK libspdk_event.so 00:03:14.076 SYMLINK libspdk_nvme.so 00:03:14.335 LIB libspdk_fuse_dispatcher.a 00:03:14.335 SO libspdk_fuse_dispatcher.so.1.0 00:03:14.335 SYMLINK libspdk_fuse_dispatcher.so 00:03:14.904 LIB libspdk_blob.a 00:03:15.163 SO libspdk_blob.so.11.0 00:03:15.163 SYMLINK libspdk_blob.so 00:03:15.421 LIB libspdk_bdev.a 00:03:15.421 CC lib/blobfs/tree.o 00:03:15.421 CC lib/blobfs/blobfs.o 00:03:15.421 CC lib/lvol/lvol.o 00:03:15.421 SO libspdk_bdev.so.17.0 00:03:15.679 SYMLINK libspdk_bdev.so 00:03:15.937 CC lib/scsi/dev.o 00:03:15.937 CC lib/scsi/lun.o 00:03:15.937 CC lib/scsi/port.o 00:03:15.937 CC lib/scsi/scsi.o 00:03:15.937 CC lib/ublk/ublk.o 00:03:15.937 CC lib/nbd/nbd.o 00:03:15.937 CC lib/nvmf/ctrlr.o 00:03:15.937 CC lib/ftl/ftl_core.o 00:03:15.937 CC lib/ftl/ftl_init.o 00:03:16.195 CC lib/ftl/ftl_layout.o 00:03:16.195 CC lib/ftl/ftl_debug.o 00:03:16.195 CC lib/scsi/scsi_bdev.o 00:03:16.196 CC lib/scsi/scsi_pr.o 00:03:16.196 CC lib/scsi/scsi_rpc.o 00:03:16.453 CC lib/nbd/nbd_rpc.o 00:03:16.453 LIB libspdk_blobfs.a 00:03:16.453 SO libspdk_blobfs.so.10.0 00:03:16.453 CC lib/scsi/task.o 00:03:16.453 LIB libspdk_lvol.a 00:03:16.453 SYMLINK libspdk_blobfs.so 00:03:16.453 CC lib/ublk/ublk_rpc.o 00:03:16.453 CC lib/ftl/ftl_io.o 00:03:16.453 CC lib/ftl/ftl_sb.o 00:03:16.453 SO libspdk_lvol.so.10.0 00:03:16.453 LIB libspdk_nbd.a 00:03:16.453 SYMLINK libspdk_lvol.so 00:03:16.453 CC lib/ftl/ftl_l2p.o 00:03:16.453 SO libspdk_nbd.so.7.0 00:03:16.453 CC lib/ftl/ftl_l2p_flat.o 00:03:16.711 CC lib/nvmf/ctrlr_discovery.o 00:03:16.711 SYMLINK libspdk_nbd.so 00:03:16.711 CC lib/nvmf/ctrlr_bdev.o 00:03:16.711 CC lib/nvmf/subsystem.o 00:03:16.711 LIB libspdk_ublk.a 00:03:16.711 LIB libspdk_scsi.a 00:03:16.711 SO libspdk_ublk.so.3.0 00:03:16.711 CC lib/ftl/ftl_nv_cache.o 00:03:16.711 SO libspdk_scsi.so.9.0 00:03:16.711 SYMLINK libspdk_ublk.so 00:03:16.711 CC lib/ftl/ftl_band.o 00:03:16.711 CC lib/nvmf/nvmf.o 00:03:16.711 CC lib/ftl/ftl_band_ops.o 00:03:16.711 SYMLINK libspdk_scsi.so 00:03:16.711 CC lib/ftl/ftl_writer.o 00:03:16.711 CC lib/ftl/ftl_rq.o 00:03:17.277 CC lib/nvmf/nvmf_rpc.o 00:03:17.277 CC lib/ftl/ftl_reloc.o 00:03:17.277 CC lib/ftl/ftl_l2p_cache.o 00:03:17.277 CC lib/iscsi/conn.o 00:03:17.277 CC lib/vhost/vhost.o 00:03:17.277 CC lib/vhost/vhost_rpc.o 00:03:17.536 CC lib/ftl/ftl_p2l.o 00:03:17.794 CC lib/ftl/ftl_p2l_log.o 00:03:17.794 CC lib/nvmf/transport.o 00:03:17.794 CC lib/nvmf/tcp.o 00:03:17.794 CC lib/nvmf/stubs.o 00:03:17.794 CC lib/iscsi/init_grp.o 00:03:17.794 CC lib/vhost/vhost_scsi.o 00:03:18.053 CC lib/vhost/vhost_blk.o 00:03:18.053 CC lib/ftl/mngt/ftl_mngt.o 00:03:18.053 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:18.053 CC lib/iscsi/iscsi.o 00:03:18.053 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:18.312 CC 
lib/iscsi/param.o 00:03:18.312 CC lib/nvmf/mdns_server.o 00:03:18.312 CC lib/nvmf/rdma.o 00:03:18.312 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:18.312 CC lib/vhost/rte_vhost_user.o 00:03:18.312 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:18.584 CC lib/nvmf/auth.o 00:03:18.584 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:18.852 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:18.852 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:18.852 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:18.852 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:19.111 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:19.111 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:19.111 CC lib/iscsi/portal_grp.o 00:03:19.111 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:19.111 CC lib/ftl/utils/ftl_conf.o 00:03:19.369 CC lib/ftl/utils/ftl_md.o 00:03:19.369 CC lib/ftl/utils/ftl_mempool.o 00:03:19.369 CC lib/iscsi/tgt_node.o 00:03:19.369 LIB libspdk_vhost.a 00:03:19.369 CC lib/ftl/utils/ftl_bitmap.o 00:03:19.369 CC lib/iscsi/iscsi_subsystem.o 00:03:19.369 CC lib/ftl/utils/ftl_property.o 00:03:19.627 CC lib/iscsi/iscsi_rpc.o 00:03:19.627 CC lib/iscsi/task.o 00:03:19.627 SO libspdk_vhost.so.8.0 00:03:19.627 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:19.627 SYMLINK libspdk_vhost.so 00:03:19.627 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:19.627 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:19.886 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:19.886 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:19.886 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:19.886 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:19.886 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:19.886 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:19.886 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:19.886 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:19.886 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:20.144 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:20.144 CC lib/ftl/base/ftl_base_dev.o 00:03:20.144 CC lib/ftl/base/ftl_base_bdev.o 00:03:20.144 CC lib/ftl/ftl_trace.o 00:03:20.144 LIB libspdk_iscsi.a 00:03:20.144 SO libspdk_iscsi.so.8.0 00:03:20.402 SYMLINK libspdk_iscsi.so 00:03:20.402 LIB libspdk_ftl.a 00:03:20.402 LIB libspdk_nvmf.a 00:03:20.660 SO libspdk_ftl.so.9.0 00:03:20.660 SO libspdk_nvmf.so.20.0 00:03:20.919 SYMLINK libspdk_nvmf.so 00:03:20.919 SYMLINK libspdk_ftl.so 00:03:21.177 CC module/env_dpdk/env_dpdk_rpc.o 00:03:21.435 CC module/accel/error/accel_error.o 00:03:21.435 CC module/accel/ioat/accel_ioat.o 00:03:21.435 CC module/accel/iaa/accel_iaa.o 00:03:21.435 CC module/blob/bdev/blob_bdev.o 00:03:21.435 CC module/accel/dsa/accel_dsa.o 00:03:21.435 CC module/keyring/file/keyring.o 00:03:21.435 CC module/sock/posix/posix.o 00:03:21.435 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:21.435 CC module/fsdev/aio/fsdev_aio.o 00:03:21.435 LIB libspdk_env_dpdk_rpc.a 00:03:21.435 SO libspdk_env_dpdk_rpc.so.6.0 00:03:21.435 SYMLINK libspdk_env_dpdk_rpc.so 00:03:21.435 CC module/keyring/file/keyring_rpc.o 00:03:21.436 CC module/accel/iaa/accel_iaa_rpc.o 00:03:21.436 CC module/accel/ioat/accel_ioat_rpc.o 00:03:21.695 CC module/accel/error/accel_error_rpc.o 00:03:21.695 LIB libspdk_scheduler_dynamic.a 00:03:21.695 SO libspdk_scheduler_dynamic.so.4.0 00:03:21.695 LIB libspdk_keyring_file.a 00:03:21.695 LIB libspdk_accel_iaa.a 00:03:21.695 LIB libspdk_blob_bdev.a 00:03:21.695 CC module/accel/dsa/accel_dsa_rpc.o 00:03:21.695 SO libspdk_keyring_file.so.2.0 00:03:21.695 SYMLINK libspdk_scheduler_dynamic.so 00:03:21.695 SO libspdk_accel_iaa.so.3.0 00:03:21.695 LIB libspdk_accel_ioat.a 00:03:21.695 SO libspdk_blob_bdev.so.11.0 00:03:21.695 LIB libspdk_accel_error.a 00:03:21.695 SO 
libspdk_accel_ioat.so.6.0 00:03:21.695 SYMLINK libspdk_keyring_file.so 00:03:21.695 CC module/keyring/linux/keyring.o 00:03:21.695 SYMLINK libspdk_accel_iaa.so 00:03:21.695 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:21.695 SO libspdk_accel_error.so.2.0 00:03:21.695 SYMLINK libspdk_blob_bdev.so 00:03:21.695 CC module/keyring/linux/keyring_rpc.o 00:03:21.953 LIB libspdk_accel_dsa.a 00:03:21.953 SYMLINK libspdk_accel_error.so 00:03:21.953 SYMLINK libspdk_accel_ioat.so 00:03:21.953 CC module/fsdev/aio/linux_aio_mgr.o 00:03:21.953 SO libspdk_accel_dsa.so.5.0 00:03:21.953 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:21.953 SYMLINK libspdk_accel_dsa.so 00:03:21.953 LIB libspdk_keyring_linux.a 00:03:21.953 SO libspdk_keyring_linux.so.1.0 00:03:21.953 CC module/scheduler/gscheduler/gscheduler.o 00:03:21.953 CC module/sock/uring/uring.o 00:03:22.211 SYMLINK libspdk_keyring_linux.so 00:03:22.211 LIB libspdk_fsdev_aio.a 00:03:22.211 LIB libspdk_scheduler_dpdk_governor.a 00:03:22.211 LIB libspdk_sock_posix.a 00:03:22.211 SO libspdk_fsdev_aio.so.1.0 00:03:22.211 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:22.211 SO libspdk_sock_posix.so.6.0 00:03:22.211 LIB libspdk_scheduler_gscheduler.a 00:03:22.211 SO libspdk_scheduler_gscheduler.so.4.0 00:03:22.211 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:22.211 SYMLINK libspdk_fsdev_aio.so 00:03:22.211 CC module/bdev/delay/vbdev_delay.o 00:03:22.211 CC module/bdev/gpt/gpt.o 00:03:22.211 CC module/blobfs/bdev/blobfs_bdev.o 00:03:22.211 CC module/bdev/error/vbdev_error.o 00:03:22.211 SYMLINK libspdk_sock_posix.so 00:03:22.211 SYMLINK libspdk_scheduler_gscheduler.so 00:03:22.211 CC module/bdev/lvol/vbdev_lvol.o 00:03:22.211 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:22.211 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:22.469 CC module/bdev/null/bdev_null.o 00:03:22.469 CC module/bdev/malloc/bdev_malloc.o 00:03:22.469 CC module/bdev/null/bdev_null_rpc.o 00:03:22.469 CC module/bdev/gpt/vbdev_gpt.o 00:03:22.469 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:22.469 LIB libspdk_blobfs_bdev.a 00:03:22.469 SO libspdk_blobfs_bdev.so.6.0 00:03:22.469 CC module/bdev/error/vbdev_error_rpc.o 00:03:22.727 SYMLINK libspdk_blobfs_bdev.so 00:03:22.727 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:22.727 LIB libspdk_bdev_delay.a 00:03:22.727 LIB libspdk_bdev_null.a 00:03:22.727 SO libspdk_bdev_delay.so.6.0 00:03:22.727 SO libspdk_bdev_null.so.6.0 00:03:22.727 LIB libspdk_bdev_gpt.a 00:03:22.727 LIB libspdk_bdev_error.a 00:03:22.727 SYMLINK libspdk_bdev_delay.so 00:03:22.727 SO libspdk_bdev_gpt.so.6.0 00:03:22.727 LIB libspdk_sock_uring.a 00:03:22.727 SO libspdk_bdev_error.so.6.0 00:03:22.727 SYMLINK libspdk_bdev_null.so 00:03:22.727 LIB libspdk_bdev_malloc.a 00:03:22.727 CC module/bdev/nvme/bdev_nvme.o 00:03:22.727 CC module/bdev/passthru/vbdev_passthru.o 00:03:22.727 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:22.985 SO libspdk_sock_uring.so.5.0 00:03:22.985 SO libspdk_bdev_malloc.so.6.0 00:03:22.985 SYMLINK libspdk_bdev_gpt.so 00:03:22.985 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:22.985 SYMLINK libspdk_bdev_error.so 00:03:22.985 CC module/bdev/nvme/nvme_rpc.o 00:03:22.985 SYMLINK libspdk_bdev_malloc.so 00:03:22.985 SYMLINK libspdk_sock_uring.so 00:03:22.985 CC module/bdev/nvme/bdev_mdns_client.o 00:03:22.985 CC module/bdev/nvme/vbdev_opal.o 00:03:22.985 CC module/bdev/raid/bdev_raid.o 00:03:22.985 LIB libspdk_bdev_lvol.a 00:03:22.985 CC module/bdev/split/vbdev_split.o 00:03:22.985 SO libspdk_bdev_lvol.so.6.0 00:03:22.985 CC 
module/bdev/split/vbdev_split_rpc.o 00:03:23.243 SYMLINK libspdk_bdev_lvol.so 00:03:23.243 LIB libspdk_bdev_passthru.a 00:03:23.243 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:23.243 SO libspdk_bdev_passthru.so.6.0 00:03:23.243 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:23.243 LIB libspdk_bdev_split.a 00:03:23.243 SYMLINK libspdk_bdev_passthru.so 00:03:23.243 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:23.243 CC module/bdev/uring/bdev_uring.o 00:03:23.243 SO libspdk_bdev_split.so.6.0 00:03:23.501 SYMLINK libspdk_bdev_split.so 00:03:23.501 CC module/bdev/raid/bdev_raid_rpc.o 00:03:23.501 CC module/bdev/aio/bdev_aio.o 00:03:23.501 CC module/bdev/raid/bdev_raid_sb.o 00:03:23.501 CC module/bdev/ftl/bdev_ftl.o 00:03:23.501 CC module/bdev/iscsi/bdev_iscsi.o 00:03:23.760 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:23.760 CC module/bdev/raid/raid0.o 00:03:23.760 CC module/bdev/uring/bdev_uring_rpc.o 00:03:23.760 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:23.760 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:23.760 CC module/bdev/aio/bdev_aio_rpc.o 00:03:23.760 LIB libspdk_bdev_zone_block.a 00:03:23.760 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:23.760 SO libspdk_bdev_zone_block.so.6.0 00:03:24.019 LIB libspdk_bdev_uring.a 00:03:24.019 SO libspdk_bdev_uring.so.6.0 00:03:24.019 SYMLINK libspdk_bdev_zone_block.so 00:03:24.019 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:24.019 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:24.019 LIB libspdk_bdev_aio.a 00:03:24.019 CC module/bdev/raid/raid1.o 00:03:24.019 CC module/bdev/raid/concat.o 00:03:24.019 SYMLINK libspdk_bdev_uring.so 00:03:24.019 LIB libspdk_bdev_ftl.a 00:03:24.019 SO libspdk_bdev_aio.so.6.0 00:03:24.019 LIB libspdk_bdev_iscsi.a 00:03:24.019 SO libspdk_bdev_ftl.so.6.0 00:03:24.019 SYMLINK libspdk_bdev_aio.so 00:03:24.019 SO libspdk_bdev_iscsi.so.6.0 00:03:24.277 SYMLINK libspdk_bdev_ftl.so 00:03:24.277 SYMLINK libspdk_bdev_iscsi.so 00:03:24.277 LIB libspdk_bdev_virtio.a 00:03:24.277 SO libspdk_bdev_virtio.so.6.0 00:03:24.277 LIB libspdk_bdev_raid.a 00:03:24.277 SYMLINK libspdk_bdev_virtio.so 00:03:24.277 SO libspdk_bdev_raid.so.6.0 00:03:24.536 SYMLINK libspdk_bdev_raid.so 00:03:25.912 LIB libspdk_bdev_nvme.a 00:03:25.912 SO libspdk_bdev_nvme.so.7.1 00:03:25.912 SYMLINK libspdk_bdev_nvme.so 00:03:26.197 CC module/event/subsystems/keyring/keyring.o 00:03:26.197 CC module/event/subsystems/sock/sock.o 00:03:26.197 CC module/event/subsystems/iobuf/iobuf.o 00:03:26.197 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:26.197 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:26.197 CC module/event/subsystems/vmd/vmd.o 00:03:26.197 CC module/event/subsystems/scheduler/scheduler.o 00:03:26.197 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:26.197 CC module/event/subsystems/fsdev/fsdev.o 00:03:26.475 LIB libspdk_event_keyring.a 00:03:26.475 LIB libspdk_event_scheduler.a 00:03:26.475 LIB libspdk_event_sock.a 00:03:26.475 SO libspdk_event_keyring.so.1.0 00:03:26.475 LIB libspdk_event_vmd.a 00:03:26.475 LIB libspdk_event_vhost_blk.a 00:03:26.475 LIB libspdk_event_iobuf.a 00:03:26.475 SO libspdk_event_scheduler.so.4.0 00:03:26.475 LIB libspdk_event_fsdev.a 00:03:26.475 SO libspdk_event_sock.so.5.0 00:03:26.475 SO libspdk_event_vmd.so.6.0 00:03:26.475 SO libspdk_event_vhost_blk.so.3.0 00:03:26.475 SO libspdk_event_fsdev.so.1.0 00:03:26.475 SO libspdk_event_iobuf.so.3.0 00:03:26.475 SYMLINK libspdk_event_scheduler.so 00:03:26.475 SYMLINK libspdk_event_keyring.so 00:03:26.475 SYMLINK libspdk_event_sock.so 00:03:26.475 SYMLINK 
libspdk_event_vhost_blk.so 00:03:26.475 SYMLINK libspdk_event_fsdev.so 00:03:26.475 SYMLINK libspdk_event_iobuf.so 00:03:26.475 SYMLINK libspdk_event_vmd.so 00:03:26.733 CC module/event/subsystems/accel/accel.o 00:03:26.992 LIB libspdk_event_accel.a 00:03:26.992 SO libspdk_event_accel.so.6.0 00:03:26.992 SYMLINK libspdk_event_accel.so 00:03:27.558 CC module/event/subsystems/bdev/bdev.o 00:03:27.558 LIB libspdk_event_bdev.a 00:03:27.558 SO libspdk_event_bdev.so.6.0 00:03:27.816 SYMLINK libspdk_event_bdev.so 00:03:27.816 CC module/event/subsystems/scsi/scsi.o 00:03:27.816 CC module/event/subsystems/ublk/ublk.o 00:03:27.816 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:27.816 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:27.816 CC module/event/subsystems/nbd/nbd.o 00:03:28.075 LIB libspdk_event_ublk.a 00:03:28.075 SO libspdk_event_ublk.so.3.0 00:03:28.075 LIB libspdk_event_scsi.a 00:03:28.075 LIB libspdk_event_nbd.a 00:03:28.075 SO libspdk_event_scsi.so.6.0 00:03:28.075 SO libspdk_event_nbd.so.6.0 00:03:28.075 SYMLINK libspdk_event_ublk.so 00:03:28.075 SYMLINK libspdk_event_scsi.so 00:03:28.333 LIB libspdk_event_nvmf.a 00:03:28.333 SYMLINK libspdk_event_nbd.so 00:03:28.333 SO libspdk_event_nvmf.so.6.0 00:03:28.333 SYMLINK libspdk_event_nvmf.so 00:03:28.333 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:28.333 CC module/event/subsystems/iscsi/iscsi.o 00:03:28.592 LIB libspdk_event_vhost_scsi.a 00:03:28.592 LIB libspdk_event_iscsi.a 00:03:28.592 SO libspdk_event_vhost_scsi.so.3.0 00:03:28.592 SO libspdk_event_iscsi.so.6.0 00:03:28.850 SYMLINK libspdk_event_vhost_scsi.so 00:03:28.850 SYMLINK libspdk_event_iscsi.so 00:03:28.850 SO libspdk.so.6.0 00:03:28.850 SYMLINK libspdk.so 00:03:29.108 CC app/trace_record/trace_record.o 00:03:29.108 CXX app/trace/trace.o 00:03:29.108 CC app/spdk_lspci/spdk_lspci.o 00:03:29.108 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:29.108 CC app/iscsi_tgt/iscsi_tgt.o 00:03:29.366 CC app/spdk_tgt/spdk_tgt.o 00:03:29.366 CC app/nvmf_tgt/nvmf_main.o 00:03:29.366 CC examples/util/zipf/zipf.o 00:03:29.366 CC examples/ioat/perf/perf.o 00:03:29.366 CC test/thread/poller_perf/poller_perf.o 00:03:29.366 LINK spdk_lspci 00:03:29.366 LINK iscsi_tgt 00:03:29.366 LINK interrupt_tgt 00:03:29.366 LINK zipf 00:03:29.366 LINK spdk_tgt 00:03:29.366 LINK spdk_trace_record 00:03:29.625 LINK poller_perf 00:03:29.625 LINK nvmf_tgt 00:03:29.625 LINK ioat_perf 00:03:29.625 LINK spdk_trace 00:03:29.625 CC app/spdk_nvme_perf/perf.o 00:03:29.625 CC app/spdk_nvme_identify/identify.o 00:03:29.625 CC app/spdk_nvme_discover/discovery_aer.o 00:03:29.884 CC app/spdk_top/spdk_top.o 00:03:29.884 CC examples/ioat/verify/verify.o 00:03:29.884 CC app/spdk_dd/spdk_dd.o 00:03:29.884 CC test/dma/test_dma/test_dma.o 00:03:29.884 CC app/fio/nvme/fio_plugin.o 00:03:29.884 TEST_HEADER include/spdk/accel.h 00:03:29.884 TEST_HEADER include/spdk/accel_module.h 00:03:29.884 TEST_HEADER include/spdk/assert.h 00:03:29.884 TEST_HEADER include/spdk/barrier.h 00:03:29.884 TEST_HEADER include/spdk/base64.h 00:03:29.884 TEST_HEADER include/spdk/bdev.h 00:03:29.884 TEST_HEADER include/spdk/bdev_module.h 00:03:29.884 LINK spdk_nvme_discover 00:03:29.884 TEST_HEADER include/spdk/bdev_zone.h 00:03:29.884 TEST_HEADER include/spdk/bit_array.h 00:03:29.884 TEST_HEADER include/spdk/bit_pool.h 00:03:29.884 CC test/app/bdev_svc/bdev_svc.o 00:03:29.884 TEST_HEADER include/spdk/blob_bdev.h 00:03:29.884 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:29.884 TEST_HEADER include/spdk/blobfs.h 00:03:29.884 TEST_HEADER 
include/spdk/blob.h 00:03:29.884 TEST_HEADER include/spdk/conf.h 00:03:29.884 TEST_HEADER include/spdk/config.h 00:03:29.884 TEST_HEADER include/spdk/cpuset.h 00:03:29.884 TEST_HEADER include/spdk/crc16.h 00:03:29.884 TEST_HEADER include/spdk/crc32.h 00:03:29.884 TEST_HEADER include/spdk/crc64.h 00:03:29.884 TEST_HEADER include/spdk/dif.h 00:03:29.884 TEST_HEADER include/spdk/dma.h 00:03:29.884 TEST_HEADER include/spdk/endian.h 00:03:29.884 TEST_HEADER include/spdk/env_dpdk.h 00:03:29.884 TEST_HEADER include/spdk/env.h 00:03:29.884 TEST_HEADER include/spdk/event.h 00:03:29.884 TEST_HEADER include/spdk/fd_group.h 00:03:29.884 TEST_HEADER include/spdk/fd.h 00:03:30.143 TEST_HEADER include/spdk/file.h 00:03:30.143 TEST_HEADER include/spdk/fsdev.h 00:03:30.143 TEST_HEADER include/spdk/fsdev_module.h 00:03:30.143 TEST_HEADER include/spdk/ftl.h 00:03:30.143 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:30.143 TEST_HEADER include/spdk/gpt_spec.h 00:03:30.143 TEST_HEADER include/spdk/hexlify.h 00:03:30.143 TEST_HEADER include/spdk/histogram_data.h 00:03:30.143 TEST_HEADER include/spdk/idxd.h 00:03:30.143 TEST_HEADER include/spdk/idxd_spec.h 00:03:30.143 TEST_HEADER include/spdk/init.h 00:03:30.143 TEST_HEADER include/spdk/ioat.h 00:03:30.143 TEST_HEADER include/spdk/ioat_spec.h 00:03:30.143 TEST_HEADER include/spdk/iscsi_spec.h 00:03:30.143 TEST_HEADER include/spdk/json.h 00:03:30.143 LINK verify 00:03:30.143 TEST_HEADER include/spdk/jsonrpc.h 00:03:30.143 TEST_HEADER include/spdk/keyring.h 00:03:30.143 TEST_HEADER include/spdk/keyring_module.h 00:03:30.143 TEST_HEADER include/spdk/likely.h 00:03:30.143 TEST_HEADER include/spdk/log.h 00:03:30.143 TEST_HEADER include/spdk/lvol.h 00:03:30.143 TEST_HEADER include/spdk/md5.h 00:03:30.143 TEST_HEADER include/spdk/memory.h 00:03:30.143 TEST_HEADER include/spdk/mmio.h 00:03:30.143 TEST_HEADER include/spdk/nbd.h 00:03:30.143 TEST_HEADER include/spdk/net.h 00:03:30.143 TEST_HEADER include/spdk/notify.h 00:03:30.143 TEST_HEADER include/spdk/nvme.h 00:03:30.143 TEST_HEADER include/spdk/nvme_intel.h 00:03:30.143 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:30.143 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:30.143 TEST_HEADER include/spdk/nvme_spec.h 00:03:30.143 TEST_HEADER include/spdk/nvme_zns.h 00:03:30.143 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:30.143 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:30.143 TEST_HEADER include/spdk/nvmf.h 00:03:30.143 TEST_HEADER include/spdk/nvmf_spec.h 00:03:30.143 TEST_HEADER include/spdk/nvmf_transport.h 00:03:30.143 TEST_HEADER include/spdk/opal.h 00:03:30.144 TEST_HEADER include/spdk/opal_spec.h 00:03:30.144 TEST_HEADER include/spdk/pci_ids.h 00:03:30.144 TEST_HEADER include/spdk/pipe.h 00:03:30.144 TEST_HEADER include/spdk/queue.h 00:03:30.144 TEST_HEADER include/spdk/reduce.h 00:03:30.144 TEST_HEADER include/spdk/rpc.h 00:03:30.144 TEST_HEADER include/spdk/scheduler.h 00:03:30.144 TEST_HEADER include/spdk/scsi.h 00:03:30.144 TEST_HEADER include/spdk/scsi_spec.h 00:03:30.144 TEST_HEADER include/spdk/sock.h 00:03:30.144 TEST_HEADER include/spdk/stdinc.h 00:03:30.144 TEST_HEADER include/spdk/string.h 00:03:30.144 TEST_HEADER include/spdk/thread.h 00:03:30.144 TEST_HEADER include/spdk/trace.h 00:03:30.144 TEST_HEADER include/spdk/trace_parser.h 00:03:30.144 TEST_HEADER include/spdk/tree.h 00:03:30.144 TEST_HEADER include/spdk/ublk.h 00:03:30.144 TEST_HEADER include/spdk/util.h 00:03:30.144 LINK bdev_svc 00:03:30.144 TEST_HEADER include/spdk/uuid.h 00:03:30.144 TEST_HEADER include/spdk/version.h 00:03:30.144 
TEST_HEADER include/spdk/vfio_user_pci.h 00:03:30.144 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:30.144 TEST_HEADER include/spdk/vhost.h 00:03:30.144 TEST_HEADER include/spdk/vmd.h 00:03:30.144 TEST_HEADER include/spdk/xor.h 00:03:30.144 TEST_HEADER include/spdk/zipf.h 00:03:30.144 CXX test/cpp_headers/accel.o 00:03:30.402 CC app/fio/bdev/fio_plugin.o 00:03:30.402 LINK spdk_dd 00:03:30.402 CXX test/cpp_headers/accel_module.o 00:03:30.402 LINK test_dma 00:03:30.402 LINK spdk_nvme 00:03:30.402 CC examples/thread/thread/thread_ex.o 00:03:30.661 LINK spdk_nvme_identify 00:03:30.661 LINK spdk_nvme_perf 00:03:30.661 CXX test/cpp_headers/assert.o 00:03:30.661 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:30.661 CC test/app/histogram_perf/histogram_perf.o 00:03:30.661 LINK spdk_top 00:03:30.661 CXX test/cpp_headers/barrier.o 00:03:30.661 CC app/vhost/vhost.o 00:03:30.661 LINK thread 00:03:30.920 LINK histogram_perf 00:03:30.920 LINK spdk_bdev 00:03:30.920 CXX test/cpp_headers/base64.o 00:03:30.920 CXX test/cpp_headers/bdev.o 00:03:30.920 CC examples/sock/hello_world/hello_sock.o 00:03:30.920 LINK vhost 00:03:30.920 CXX test/cpp_headers/bdev_module.o 00:03:30.920 CC examples/vmd/lsvmd/lsvmd.o 00:03:30.920 CXX test/cpp_headers/bdev_zone.o 00:03:30.920 CC examples/idxd/perf/perf.o 00:03:31.178 LINK nvme_fuzz 00:03:31.178 LINK lsvmd 00:03:31.178 CC examples/accel/perf/accel_perf.o 00:03:31.178 CXX test/cpp_headers/bit_array.o 00:03:31.178 LINK hello_sock 00:03:31.178 CXX test/cpp_headers/bit_pool.o 00:03:31.178 CC examples/vmd/led/led.o 00:03:31.178 CC examples/blob/hello_world/hello_blob.o 00:03:31.178 CXX test/cpp_headers/blob_bdev.o 00:03:31.178 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:31.439 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:31.439 LINK idxd_perf 00:03:31.439 CXX test/cpp_headers/blobfs_bdev.o 00:03:31.439 LINK led 00:03:31.439 CC test/app/jsoncat/jsoncat.o 00:03:31.439 CXX test/cpp_headers/blobfs.o 00:03:31.439 LINK hello_blob 00:03:31.439 CC examples/nvme/hello_world/hello_world.o 00:03:31.697 CC examples/nvme/reconnect/reconnect.o 00:03:31.697 LINK hello_fsdev 00:03:31.697 LINK jsoncat 00:03:31.697 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:31.697 LINK accel_perf 00:03:31.697 CXX test/cpp_headers/blob.o 00:03:31.698 CC examples/nvme/arbitration/arbitration.o 00:03:31.698 CXX test/cpp_headers/conf.o 00:03:31.698 LINK hello_world 00:03:31.956 CC examples/blob/cli/blobcli.o 00:03:31.956 LINK reconnect 00:03:31.956 CXX test/cpp_headers/config.o 00:03:31.956 CXX test/cpp_headers/cpuset.o 00:03:31.956 CC test/nvme/aer/aer.o 00:03:31.956 CC test/event/event_perf/event_perf.o 00:03:31.956 LINK arbitration 00:03:31.956 CC test/nvme/reset/reset.o 00:03:32.213 CC test/env/mem_callbacks/mem_callbacks.o 00:03:32.213 LINK nvme_manage 00:03:32.213 CXX test/cpp_headers/crc16.o 00:03:32.213 LINK event_perf 00:03:32.471 CC examples/bdev/hello_world/hello_bdev.o 00:03:32.471 LINK aer 00:03:32.471 CC examples/bdev/bdevperf/bdevperf.o 00:03:32.471 LINK reset 00:03:32.471 CXX test/cpp_headers/crc32.o 00:03:32.471 LINK blobcli 00:03:32.471 CC examples/nvme/hotplug/hotplug.o 00:03:32.471 CC test/event/reactor/reactor.o 00:03:32.471 CXX test/cpp_headers/crc64.o 00:03:32.729 LINK hello_bdev 00:03:32.729 LINK reactor 00:03:32.729 CC test/event/reactor_perf/reactor_perf.o 00:03:32.729 CC test/nvme/sgl/sgl.o 00:03:32.729 LINK hotplug 00:03:32.729 CC test/nvme/e2edp/nvme_dp.o 00:03:32.729 LINK mem_callbacks 00:03:32.729 CXX test/cpp_headers/dif.o 00:03:32.729 LINK reactor_perf 00:03:32.988 CC 
test/nvme/overhead/overhead.o 00:03:32.988 CXX test/cpp_headers/dma.o 00:03:32.988 CC test/event/app_repeat/app_repeat.o 00:03:32.988 CC test/env/vtophys/vtophys.o 00:03:32.988 LINK sgl 00:03:32.988 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:32.988 LINK nvme_dp 00:03:32.988 LINK iscsi_fuzz 00:03:32.988 CC test/event/scheduler/scheduler.o 00:03:32.988 LINK vtophys 00:03:32.988 CXX test/cpp_headers/endian.o 00:03:32.988 LINK app_repeat 00:03:33.246 LINK cmb_copy 00:03:33.246 LINK bdevperf 00:03:33.246 LINK overhead 00:03:33.246 CC test/nvme/err_injection/err_injection.o 00:03:33.246 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.246 CXX test/cpp_headers/env_dpdk.o 00:03:33.246 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:33.246 LINK scheduler 00:03:33.246 CC test/env/memory/memory_ut.o 00:03:33.504 CC test/nvme/startup/startup.o 00:03:33.504 LINK env_dpdk_post_init 00:03:33.504 LINK err_injection 00:03:33.504 CC examples/nvme/abort/abort.o 00:03:33.504 CC test/app/stub/stub.o 00:03:33.504 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:33.504 CXX test/cpp_headers/env.o 00:03:33.504 CC test/env/pci/pci_ut.o 00:03:33.504 LINK startup 00:03:33.504 CC test/rpc_client/rpc_client_test.o 00:03:33.762 CXX test/cpp_headers/event.o 00:03:33.762 LINK stub 00:03:33.762 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:33.762 CC test/accel/dif/dif.o 00:03:33.762 LINK rpc_client_test 00:03:33.762 LINK abort 00:03:33.762 CXX test/cpp_headers/fd_group.o 00:03:33.762 CC test/nvme/reserve/reserve.o 00:03:33.762 LINK vhost_fuzz 00:03:34.020 LINK pci_ut 00:03:34.020 LINK pmr_persistence 00:03:34.020 CXX test/cpp_headers/fd.o 00:03:34.020 LINK reserve 00:03:34.278 CC test/blobfs/mkfs/mkfs.o 00:03:34.278 CC test/nvme/connect_stress/connect_stress.o 00:03:34.278 CC test/nvme/simple_copy/simple_copy.o 00:03:34.278 CXX test/cpp_headers/file.o 00:03:34.278 CC test/nvme/boot_partition/boot_partition.o 00:03:34.278 CC test/lvol/esnap/esnap.o 00:03:34.278 LINK connect_stress 00:03:34.278 CXX test/cpp_headers/fsdev.o 00:03:34.278 LINK mkfs 00:03:34.278 CC test/nvme/compliance/nvme_compliance.o 00:03:34.278 CC examples/nvmf/nvmf/nvmf.o 00:03:34.536 LINK simple_copy 00:03:34.537 LINK boot_partition 00:03:34.537 LINK dif 00:03:34.537 CXX test/cpp_headers/fsdev_module.o 00:03:34.537 LINK memory_ut 00:03:34.537 CC test/nvme/fused_ordering/fused_ordering.o 00:03:34.796 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:34.796 CC test/nvme/fdp/fdp.o 00:03:34.796 LINK nvmf 00:03:34.796 CXX test/cpp_headers/ftl.o 00:03:34.796 LINK nvme_compliance 00:03:34.796 CC test/nvme/cuse/cuse.o 00:03:34.796 CXX test/cpp_headers/fuse_dispatcher.o 00:03:34.796 LINK fused_ordering 00:03:34.796 LINK doorbell_aers 00:03:35.086 CXX test/cpp_headers/gpt_spec.o 00:03:35.086 CC test/bdev/bdevio/bdevio.o 00:03:35.086 CXX test/cpp_headers/hexlify.o 00:03:35.086 CXX test/cpp_headers/histogram_data.o 00:03:35.086 CXX test/cpp_headers/idxd.o 00:03:35.086 CXX test/cpp_headers/idxd_spec.o 00:03:35.086 LINK fdp 00:03:35.086 CXX test/cpp_headers/init.o 00:03:35.086 CXX test/cpp_headers/ioat.o 00:03:35.086 CXX test/cpp_headers/ioat_spec.o 00:03:35.086 CXX test/cpp_headers/iscsi_spec.o 00:03:35.086 CXX test/cpp_headers/json.o 00:03:35.086 CXX test/cpp_headers/jsonrpc.o 00:03:35.086 CXX test/cpp_headers/keyring.o 00:03:35.344 CXX test/cpp_headers/keyring_module.o 00:03:35.344 CXX test/cpp_headers/likely.o 00:03:35.344 CXX test/cpp_headers/log.o 00:03:35.344 CXX test/cpp_headers/lvol.o 00:03:35.344 LINK bdevio 00:03:35.344 CXX 
test/cpp_headers/md5.o 00:03:35.344 CXX test/cpp_headers/memory.o 00:03:35.344 CXX test/cpp_headers/mmio.o 00:03:35.345 CXX test/cpp_headers/nbd.o 00:03:35.345 CXX test/cpp_headers/net.o 00:03:35.345 CXX test/cpp_headers/notify.o 00:03:35.603 CXX test/cpp_headers/nvme.o 00:03:35.603 CXX test/cpp_headers/nvme_intel.o 00:03:35.603 CXX test/cpp_headers/nvme_ocssd.o 00:03:35.603 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:35.603 CXX test/cpp_headers/nvme_spec.o 00:03:35.603 CXX test/cpp_headers/nvme_zns.o 00:03:35.603 CXX test/cpp_headers/nvmf_cmd.o 00:03:35.603 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:35.603 CXX test/cpp_headers/nvmf.o 00:03:35.603 CXX test/cpp_headers/nvmf_spec.o 00:03:35.862 CXX test/cpp_headers/nvmf_transport.o 00:03:35.862 CXX test/cpp_headers/opal.o 00:03:35.862 CXX test/cpp_headers/opal_spec.o 00:03:35.862 CXX test/cpp_headers/pci_ids.o 00:03:35.862 CXX test/cpp_headers/pipe.o 00:03:35.862 CXX test/cpp_headers/queue.o 00:03:35.862 CXX test/cpp_headers/reduce.o 00:03:35.862 CXX test/cpp_headers/rpc.o 00:03:35.862 CXX test/cpp_headers/scheduler.o 00:03:35.862 CXX test/cpp_headers/scsi.o 00:03:35.862 CXX test/cpp_headers/scsi_spec.o 00:03:35.862 CXX test/cpp_headers/sock.o 00:03:35.862 CXX test/cpp_headers/stdinc.o 00:03:36.120 CXX test/cpp_headers/string.o 00:03:36.120 CXX test/cpp_headers/thread.o 00:03:36.120 CXX test/cpp_headers/trace.o 00:03:36.120 CXX test/cpp_headers/trace_parser.o 00:03:36.120 LINK cuse 00:03:36.120 CXX test/cpp_headers/tree.o 00:03:36.120 CXX test/cpp_headers/ublk.o 00:03:36.120 CXX test/cpp_headers/util.o 00:03:36.120 CXX test/cpp_headers/uuid.o 00:03:36.120 CXX test/cpp_headers/version.o 00:03:36.120 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.120 CXX test/cpp_headers/vfio_user_spec.o 00:03:36.120 CXX test/cpp_headers/vhost.o 00:03:36.378 CXX test/cpp_headers/vmd.o 00:03:36.378 CXX test/cpp_headers/xor.o 00:03:36.378 CXX test/cpp_headers/zipf.o 00:03:39.666 LINK esnap 00:03:39.925 00:03:39.925 real 1m33.004s 00:03:39.925 user 8m19.865s 00:03:39.925 sys 1m42.043s 00:03:39.925 08:16:27 make -- common/autotest_common.sh@1133 -- $ xtrace_disable 00:03:39.925 08:16:27 make -- common/autotest_common.sh@10 -- $ set +x 00:03:39.925 ************************************ 00:03:39.925 END TEST make 00:03:39.925 ************************************ 00:03:39.925 08:16:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:39.925 08:16:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:39.925 08:16:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:39.925 08:16:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.925 08:16:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:39.925 08:16:27 -- pm/common@44 -- $ pid=5253 00:03:39.925 08:16:27 -- pm/common@50 -- $ kill -TERM 5253 00:03:39.925 08:16:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.925 08:16:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:39.925 08:16:27 -- pm/common@44 -- $ pid=5255 00:03:39.925 08:16:27 -- pm/common@50 -- $ kill -TERM 5255 00:03:39.925 08:16:27 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:39.925 08:16:27 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:40.184 08:16:27 -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:03:40.184 08:16:27 -- common/autotest_common.sh@1638 
-- # lcov --version 00:03:40.184 08:16:27 -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:03:40.184 08:16:27 -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:03:40.184 08:16:27 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.184 08:16:27 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.184 08:16:27 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.184 08:16:27 -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.184 08:16:27 -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.184 08:16:27 -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.184 08:16:27 -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.184 08:16:27 -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.184 08:16:27 -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.184 08:16:27 -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.184 08:16:27 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.184 08:16:27 -- scripts/common.sh@344 -- # case "$op" in 00:03:40.184 08:16:27 -- scripts/common.sh@345 -- # : 1 00:03:40.184 08:16:27 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.184 08:16:27 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:40.184 08:16:27 -- scripts/common.sh@365 -- # decimal 1 00:03:40.184 08:16:27 -- scripts/common.sh@353 -- # local d=1 00:03:40.184 08:16:27 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.184 08:16:27 -- scripts/common.sh@355 -- # echo 1 00:03:40.184 08:16:27 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.184 08:16:27 -- scripts/common.sh@366 -- # decimal 2 00:03:40.184 08:16:27 -- scripts/common.sh@353 -- # local d=2 00:03:40.184 08:16:27 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.184 08:16:27 -- scripts/common.sh@355 -- # echo 2 00:03:40.184 08:16:27 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.184 08:16:27 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.184 08:16:27 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.184 08:16:27 -- scripts/common.sh@368 -- # return 0 00:03:40.184 08:16:27 -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.184 08:16:27 -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:03:40.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.184 --rc genhtml_branch_coverage=1 00:03:40.184 --rc genhtml_function_coverage=1 00:03:40.184 --rc genhtml_legend=1 00:03:40.184 --rc geninfo_all_blocks=1 00:03:40.184 --rc geninfo_unexecuted_blocks=1 00:03:40.184 00:03:40.184 ' 00:03:40.184 08:16:27 -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:03:40.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.184 --rc genhtml_branch_coverage=1 00:03:40.184 --rc genhtml_function_coverage=1 00:03:40.184 --rc genhtml_legend=1 00:03:40.184 --rc geninfo_all_blocks=1 00:03:40.184 --rc geninfo_unexecuted_blocks=1 00:03:40.184 00:03:40.184 ' 00:03:40.184 08:16:27 -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:03:40.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.184 --rc genhtml_branch_coverage=1 00:03:40.184 --rc genhtml_function_coverage=1 00:03:40.184 --rc genhtml_legend=1 00:03:40.184 --rc geninfo_all_blocks=1 00:03:40.184 --rc geninfo_unexecuted_blocks=1 00:03:40.184 00:03:40.184 ' 00:03:40.184 08:16:27 -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:03:40.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.184 --rc genhtml_branch_coverage=1 00:03:40.184 --rc 
genhtml_function_coverage=1 00:03:40.184 --rc genhtml_legend=1 00:03:40.184 --rc geninfo_all_blocks=1 00:03:40.184 --rc geninfo_unexecuted_blocks=1 00:03:40.184 00:03:40.184 ' 00:03:40.184 08:16:27 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:40.184 08:16:27 -- nvmf/common.sh@7 -- # uname -s 00:03:40.184 08:16:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:40.184 08:16:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:40.184 08:16:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:40.184 08:16:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:40.184 08:16:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:40.184 08:16:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:40.184 08:16:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:40.184 08:16:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:40.184 08:16:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:40.184 08:16:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:40.184 08:16:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:03:40.184 08:16:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:03:40.184 08:16:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:40.184 08:16:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:40.184 08:16:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:40.184 08:16:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:40.184 08:16:27 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:40.184 08:16:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:40.184 08:16:27 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:40.184 08:16:27 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:40.184 08:16:27 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:40.184 08:16:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.184 08:16:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.184 08:16:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.184 08:16:27 -- paths/export.sh@5 -- # export PATH 00:03:40.184 08:16:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.184 08:16:27 -- nvmf/common.sh@51 -- # : 0 00:03:40.184 08:16:27 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:40.184 08:16:27 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:40.184 08:16:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:40.184 08:16:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:40.184 08:16:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:40.184 08:16:27 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:40.184 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:40.184 08:16:27 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:40.184 08:16:27 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:40.184 08:16:27 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:40.184 08:16:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:40.184 08:16:27 -- spdk/autotest.sh@32 -- # uname -s 00:03:40.184 08:16:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:40.184 08:16:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:40.184 08:16:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.184 08:16:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:40.184 08:16:27 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.184 08:16:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:40.184 08:16:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:40.184 08:16:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:40.184 08:16:27 -- spdk/autotest.sh@48 -- # udevadm_pid=54384 00:03:40.184 08:16:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:40.184 08:16:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:40.184 08:16:27 -- pm/common@17 -- # local monitor 00:03:40.184 08:16:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.184 08:16:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.184 08:16:27 -- pm/common@25 -- # sleep 1 00:03:40.184 08:16:27 -- pm/common@21 -- # date +%s 00:03:40.184 08:16:27 -- pm/common@21 -- # date +%s 00:03:40.184 08:16:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732090587 00:03:40.185 08:16:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732090587 00:03:40.443 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732090587_collect-cpu-load.pm.log 00:03:40.443 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732090587_collect-vmstat.pm.log 00:03:41.380 08:16:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:41.380 08:16:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:41.380 08:16:28 -- common/autotest_common.sh@729 -- # xtrace_disable 00:03:41.380 08:16:28 -- common/autotest_common.sh@10 -- # set +x 
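The xtrace above walks scripts/common.sh splitting the installed lcov version (1.15) and the threshold (2) on IFS=.-: and comparing them field by field before keeping the pre-2.0 --rc option names. A minimal standalone sketch of that comparison, assuming nothing beyond plain bash; the name version_lt is illustrative and is not the repo's cmp_versions/lt helpers:

#!/usr/bin/env bash
# version_lt A B -> succeeds (exit 0) when A sorts before B, numeric field by field.
version_lt() {
    local IFS=.-:                        # split on dots, dashes and colons, as in the trace
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}  # missing trailing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                             # equal versions are not "less than"
}

# e.g. lcov 1.15 predates 2.x, so the run keeps the older lcov_*_coverage flag spelling
version_lt 1.15 2 && echo 'lcov < 2: use the --rc lcov_branch_coverage/lcov_function_coverage options'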
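The same stretch of the trace also shows autotest.sh recording the distro's systemd-coredump handler in old_core_pattern, echoing its own pipe-style collector, and creating the coredumps output directory before the resource monitors start. A sketch of that swap, under the assumption that the echo in the trace is redirected to /proc/sys/kernel/core_pattern and restored when the run ends (the trap-based restore here is illustrative); needs root:

#!/usr/bin/env bash
# Remember the current handler so it can be put back at exit.
old_core_pattern=$(< /proc/sys/kernel/core_pattern)
trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT

# Pipe crashes to the collector: %P = PID (initial namespace), %s = signal, %t = dump time.
echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' \
    > /proc/sys/kernel/core_pattern

# Directory the collector writes into, matching the path echoed in the log.
mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps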
00:03:41.380 08:16:28 -- spdk/autotest.sh@59 -- # create_test_list 00:03:41.380 08:16:28 -- common/autotest_common.sh@755 -- # xtrace_disable 00:03:41.380 08:16:28 -- common/autotest_common.sh@10 -- # set +x 00:03:41.380 08:16:28 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:41.380 08:16:28 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:41.380 08:16:28 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:41.380 08:16:28 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:41.380 08:16:28 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:41.380 08:16:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:41.380 08:16:28 -- common/autotest_common.sh@1445 -- # uname 00:03:41.380 08:16:28 -- common/autotest_common.sh@1445 -- # '[' Linux = FreeBSD ']' 00:03:41.380 08:16:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:41.380 08:16:28 -- common/autotest_common.sh@1465 -- # uname 00:03:41.380 08:16:28 -- common/autotest_common.sh@1465 -- # [[ Linux = FreeBSD ]] 00:03:41.380 08:16:28 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:41.380 08:16:28 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:41.380 lcov: LCOV version 1.15 00:03:41.380 08:16:28 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:59.511 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:59.511 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:14.389 08:17:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:14.389 08:17:01 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:14.389 08:17:01 -- common/autotest_common.sh@10 -- # set +x 00:04:14.389 08:17:01 -- spdk/autotest.sh@78 -- # rm -f 00:04:14.389 08:17:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.324 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.324 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:15.324 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:15.324 08:17:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:15.324 08:17:02 -- common/autotest_common.sh@1602 -- # zoned_devs=() 00:04:15.324 08:17:02 -- common/autotest_common.sh@1602 -- # local -gA zoned_devs 00:04:15.324 08:17:02 -- common/autotest_common.sh@1603 -- # local nvme bdf 00:04:15.324 08:17:02 -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:04:15.324 08:17:02 -- common/autotest_common.sh@1606 -- # is_block_zoned nvme0n1 00:04:15.325 08:17:02 -- common/autotest_common.sh@1595 -- # local device=nvme0n1 00:04:15.325 08:17:02 -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.325 08:17:02 -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:04:15.325 08:17:02 -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:04:15.325 08:17:02 -- 
common/autotest_common.sh@1606 -- # is_block_zoned nvme1n1 00:04:15.325 08:17:02 -- common/autotest_common.sh@1595 -- # local device=nvme1n1 00:04:15.325 08:17:02 -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:15.325 08:17:02 -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:04:15.325 08:17:02 -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:04:15.325 08:17:02 -- common/autotest_common.sh@1606 -- # is_block_zoned nvme1n2 00:04:15.325 08:17:02 -- common/autotest_common.sh@1595 -- # local device=nvme1n2 00:04:15.325 08:17:02 -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:15.325 08:17:02 -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:04:15.325 08:17:02 -- common/autotest_common.sh@1605 -- # for nvme in /sys/block/nvme* 00:04:15.325 08:17:02 -- common/autotest_common.sh@1606 -- # is_block_zoned nvme1n3 00:04:15.325 08:17:02 -- common/autotest_common.sh@1595 -- # local device=nvme1n3 00:04:15.325 08:17:02 -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:15.325 08:17:02 -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:04:15.325 08:17:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:15.325 08:17:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.325 08:17:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.325 08:17:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:15.325 08:17:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:15.325 08:17:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:15.325 No valid GPT data, bailing 00:04:15.325 08:17:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.325 08:17:02 -- scripts/common.sh@394 -- # pt= 00:04:15.325 08:17:02 -- scripts/common.sh@395 -- # return 1 00:04:15.325 08:17:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:15.325 1+0 records in 00:04:15.325 1+0 records out 00:04:15.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00411532 s, 255 MB/s 00:04:15.325 08:17:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.325 08:17:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.325 08:17:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:15.325 08:17:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:15.325 08:17:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:15.325 No valid GPT data, bailing 00:04:15.325 08:17:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:15.325 08:17:02 -- scripts/common.sh@394 -- # pt= 00:04:15.325 08:17:02 -- scripts/common.sh@395 -- # return 1 00:04:15.325 08:17:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:15.325 1+0 records in 00:04:15.325 1+0 records out 00:04:15.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00390547 s, 268 MB/s 00:04:15.325 08:17:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.325 08:17:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.325 08:17:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:15.325 08:17:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:15.325 08:17:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:15.325 No valid GPT data, bailing 00:04:15.325 08:17:02 -- scripts/common.sh@394 -- # blkid -s 
PTTYPE -o value /dev/nvme1n2 00:04:15.583 08:17:02 -- scripts/common.sh@394 -- # pt= 00:04:15.583 08:17:02 -- scripts/common.sh@395 -- # return 1 00:04:15.583 08:17:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:15.583 1+0 records in 00:04:15.583 1+0 records out 00:04:15.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445398 s, 235 MB/s 00:04:15.583 08:17:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.583 08:17:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.583 08:17:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:15.583 08:17:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:15.583 08:17:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:15.583 No valid GPT data, bailing 00:04:15.583 08:17:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:15.583 08:17:02 -- scripts/common.sh@394 -- # pt= 00:04:15.583 08:17:02 -- scripts/common.sh@395 -- # return 1 00:04:15.583 08:17:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:15.583 1+0 records in 00:04:15.583 1+0 records out 00:04:15.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00348594 s, 301 MB/s 00:04:15.583 08:17:02 -- spdk/autotest.sh@105 -- # sync 00:04:15.583 08:17:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:15.583 08:17:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:15.583 08:17:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:17.497 08:17:04 -- spdk/autotest.sh@111 -- # uname -s 00:04:17.497 08:17:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:17.497 08:17:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:17.497 08:17:04 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:18.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.082 Hugepages 00:04:18.082 node hugesize free / total 00:04:18.082 node0 1048576kB 0 / 0 00:04:18.082 node0 2048kB 0 / 0 00:04:18.082 00:04:18.082 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.343 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:18.343 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:18.343 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:18.343 08:17:05 -- spdk/autotest.sh@117 -- # uname -s 00:04:18.343 08:17:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:18.343 08:17:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:18.343 08:17:05 -- nvme/functions.sh@217 -- # scan_nvme_ctrls 00:04:18.343 08:17:05 -- nvme/functions.sh@47 -- # local ctrl ctrl_dev reg val ns pci 00:04:18.343 08:17:05 -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:04:18.343 08:17:05 -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:04:18.343 08:17:05 -- nvme/functions.sh@51 -- # pci=0000:00:10.0 00:04:18.343 08:17:05 -- nvme/functions.sh@52 -- # pci_can_use 0000:00:10.0 00:04:18.343 08:17:05 -- scripts/common.sh@18 -- # local i 00:04:18.343 08:17:05 -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:04:18.343 08:17:05 -- scripts/common.sh@25 -- # [[ -z '' ]] 00:04:18.343 08:17:05 -- scripts/common.sh@27 -- # return 0 00:04:18.343 08:17:05 -- nvme/functions.sh@53 -- # ctrl_dev=nvme0 00:04:18.343 08:17:05 -- nvme/functions.sh@54 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:04:18.343 08:17:05 -- 
nvme/functions.sh@19 -- # local ref=nvme0 reg val 00:04:18.343 08:17:05 -- nvme/functions.sh@20 -- # shift 00:04:18.343 08:17:05 -- nvme/functions.sh@22 -- # local -gA 'nvme0=()' 00:04:18.343 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.343 08:17:05 -- nvme/functions.sh@18 -- # nvme id-ctrl /dev/nvme0 00:04:18.343 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.343 08:17:05 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:04:18.343 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.343 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.343 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:04:18.343 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[vid]="0x1b36"' 00:04:18.343 08:17:05 -- nvme/functions.sh@25 -- # nvme0[vid]=0x1b36 00:04:18.343 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.343 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.343 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[ssvid]="0x1af4"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[ssvid]=0x1af4 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 12340 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[sn]="12340 "' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[sn]='12340 ' 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[fr]="8.0.0 "' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[fr]='8.0.0 ' 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[rab]="6"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[rab]=6 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[ieee]="525400"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[ieee]=525400 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[cmic]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[cmic]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[mdts]="7"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[mdts]=7 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- 
nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[cntlid]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[cntlid]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[ver]="0x10400"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[ver]=0x10400 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[rtd3r]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[rtd3r]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[rtd3e]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[rtd3e]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[oaes]="0x100"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[oaes]=0x100 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x8000 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[ctratt]="0x8000"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[ctratt]=0x8000 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[rrls]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[rrls]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[cntrltype]="1"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[cntrltype]=1 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[crdt1]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[crdt1]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- 
nvme/functions.sh@25 -- # eval 'nvme0[crdt2]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[crdt2]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[crdt3]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[crdt3]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[nvmsr]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[nvmsr]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[vwci]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[vwci]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[mec]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[mec]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[oacs]="0x12a"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[oacs]=0x12a 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[acl]="3"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[acl]=3 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[aerl]="3"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[aerl]=3 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[frmw]="0x3"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[frmw]=0x3 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[lpa]="0x7"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[lpa]=0x7 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[elpe]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[elpe]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- 
nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[npss]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[npss]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[avscc]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[avscc]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[apsta]="0"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[apsta]=0 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[wctemp]="343"' 00:04:18.344 08:17:05 -- nvme/functions.sh@25 -- # nvme0[wctemp]=343 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.344 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.344 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[cctemp]="373"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[cctemp]=373 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[mtfa]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[mtfa]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[hmpre]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[hmpre]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[hmmin]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[hmmin]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[tnvmcap]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[tnvmcap]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[unvmcap]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[unvmcap]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[rpmbs]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[rpmbs]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- 
nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[edstt]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[edstt]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[dsto]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[dsto]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[fwug]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[fwug]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[kas]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[kas]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[hctma]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[hctma]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[mntmt]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[mntmt]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[mxtmt]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[mxtmt]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[sanicap]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[sanicap]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[hmminds]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[hmminds]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[hmmaxd]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[hmmaxd]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[nsetidmax]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[nsetidmax]=0 00:04:18.345 08:17:05 -- 
nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[endgidmax]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[endgidmax]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[anatt]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[anatt]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[anacap]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[anacap]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[anagrpmax]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[anagrpmax]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[nanagrpid]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[nanagrpid]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[pels]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[pels]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[domainid]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[domainid]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[megcap]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[megcap]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[sqes]="0x66"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[sqes]=0x66 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[cqes]="0x44"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[cqes]=0x44 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[maxcmd]="0"' 
00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[maxcmd]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[nn]="256"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[nn]=256 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x15d ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[oncs]="0x15d"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[oncs]=0x15d 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[fuses]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[fuses]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[fna]="0"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[fna]=0 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.345 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.345 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[vwc]="0x7"' 00:04:18.345 08:17:05 -- nvme/functions.sh@25 -- # nvme0[vwc]=0x7 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[awun]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[awun]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[awupf]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[awupf]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[icsvscc]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[icsvscc]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[nwpc]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[nwpc]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[acwu]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[acwu]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:04:18.346 08:17:05 
-- nvme/functions.sh@25 -- # eval 'nvme0[ocfs]="0x3"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[ocfs]=0x3 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[sgls]="0x1"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[sgls]=0x1 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[mnan]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[mnan]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[maxdna]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[maxdna]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[maxcna]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[maxcna]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[oaqd]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[oaqd]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[ioccsz]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[ioccsz]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[iorcsz]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[iorcsz]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[icdoff]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[icdoff]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[fcatt]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[fcatt]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 
08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[msdbd]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[msdbd]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[ofcs]="0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[ofcs]=0 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n - ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0[active_power_workload]="-"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0[active_power_workload]=- 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme0_ns 00:04:18.346 08:17:05 -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:04:18.346 08:17:05 -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@58 -- # ns_dev=nvme0n1 00:04:18.346 08:17:05 -- nvme/functions.sh@59 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:04:18.346 08:17:05 -- nvme/functions.sh@19 -- # local ref=nvme0n1 reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@20 -- # shift 00:04:18.346 08:17:05 -- nvme/functions.sh@22 -- # local -gA 'nvme0n1=()' 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@18 -- # nvme id-ns /dev/nvme0n1 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsze]="0x140000"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nsze]=0x140000 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[ncap]="0x140000"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[ncap]=0x140000 00:04:18.346 
08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x140000 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nuse]="0x140000"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nuse]=0x140000 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nsfeat]=0x14 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nlbaf]="7"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nlbaf]=7 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[flbas]="0x4"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[flbas]=0x4 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[mc]="0x3"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[mc]=0x3 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[dpc]="0x1f"' 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[dpc]=0x1f 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.346 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.346 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.346 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[dps]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[dps]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nmic]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nmic]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[rescap]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[rescap]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[fpi]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[fpi]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:04:18.347 08:17:05 -- 
nvme/functions.sh@25 -- # eval 'nvme0n1[dlfeat]="1"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[dlfeat]=1 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nawun]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nawun]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nawupf]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nawupf]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nacwu]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nacwu]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabsn]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nabsn]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabo]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nabo]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nabspf]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nabspf]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[noiob]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[noiob]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nvmcap]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nvmcap]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[npwg]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[npwg]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[npwa]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[npwa]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg 
val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[npdg]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[npdg]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[npda]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[npda]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nows]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nows]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[mssrl]="128"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[mssrl]=128 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[mcl]="128"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[mcl]=128 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[msrc]="127"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[msrc]=127 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nulbaf]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nulbaf]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[anagrpid]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[anagrpid]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.347 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nsattr]="0"' 00:04:18.347 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nsattr]=0 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.347 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.610 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nvmsetid]="0"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nvmsetid]=0 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[endgid]="0"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[endgid]=0 
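Note: the repeated IFS=: / read -r reg val / eval steps above are nvme_get() from nvme/functions.sh turning nvme id-ns output into a bash associative array (nvme0n1[nuse], nvme0n1[flbas], and so on). A minimal standalone sketch of the same pattern follows; the parse_id_ns name and the ns_info array are illustrative only, not part of the SPDK helper.

    #!/usr/bin/env bash
    # Sketch of the parsing pattern traced above: run nvme id-ns, split each
    # "field : value" line on the first ':', and keep the pairs in an
    # associative array. Requires nvme-cli; all names here are illustrative.
    declare -A ns_info
    parse_id_ns() {
        local dev=$1 reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}                 # "lbaf  0 " -> "lbaf0"
            val="${val#"${val%%[![:space:]]*}"}"     # trim leading whitespace
            [[ -n $reg && -n $val ]] && ns_info[$reg]=$val
        done < <(nvme id-ns "$dev")
    }
    parse_id_ns /dev/nvme0n1
    echo "nuse=${ns_info[nuse]} flbas=${ns_info[flbas]} lbaf0=${ns_info[lbaf0]}"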
00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[eui64]=0000000000000000 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:04:18.611 08:17:05 -- 
nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:04:18.611 08:17:05 -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme0 00:04:18.611 08:17:05 -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme0_ns 00:04:18.611 08:17:05 -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:10.0 00:04:18.611 08:17:05 -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme0 00:04:18.611 08:17:05 -- nvme/functions.sh@49 -- # for ctrl in /sys/class/nvme/nvme* 00:04:18.611 08:17:05 -- nvme/functions.sh@50 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@51 -- # pci=0000:00:11.0 00:04:18.611 08:17:05 -- nvme/functions.sh@52 -- # pci_can_use 0000:00:11.0 00:04:18.611 08:17:05 -- scripts/common.sh@18 -- # local i 00:04:18.611 08:17:05 -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:04:18.611 08:17:05 -- scripts/common.sh@25 -- # [[ -z '' ]] 00:04:18.611 08:17:05 -- scripts/common.sh@27 -- # return 0 00:04:18.611 08:17:05 -- nvme/functions.sh@53 -- # ctrl_dev=nvme1 00:04:18.611 08:17:05 -- nvme/functions.sh@54 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:04:18.611 08:17:05 -- nvme/functions.sh@19 -- # local ref=nvme1 reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@20 -- # shift 00:04:18.611 08:17:05 -- nvme/functions.sh@22 -- # local -gA 'nvme1=()' 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@18 -- # nvme id-ctrl /dev/nvme1 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x1b36 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[vid]="0x1b36"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[vid]=0x1b36 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x1af4 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[ssvid]="0x1af4"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[ssvid]=0x1af4 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 12341 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[sn]="12341 "' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[sn]='12341 ' 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n QEMU NVMe Ctrl ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- 
nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 8.0.0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[fr]="8.0.0 "' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[fr]='8.0.0 ' 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 6 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[rab]="6"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[rab]=6 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 525400 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[ieee]="525400"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[ieee]=525400 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[cmic]="0"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[cmic]=0 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[mdts]="7"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[mdts]=7 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[cntlid]="0"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[cntlid]=0 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x10400 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[ver]="0x10400"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[ver]=0x10400 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[rtd3r]="0"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[rtd3r]=0 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[rtd3e]="0"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[rtd3e]=0 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.611 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.611 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x100 ]] 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[oaes]="0x100"' 00:04:18.611 08:17:05 -- nvme/functions.sh@25 -- # nvme1[oaes]=0x100 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x8000 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[ctratt]="0x8000"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # 
nvme1[ctratt]=0x8000 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[rrls]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[rrls]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[cntrltype]="1"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[cntrltype]=1 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[crdt1]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[crdt1]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[crdt2]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[crdt2]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[crdt3]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[crdt3]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[nvmsr]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[nvmsr]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[vwci]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[vwci]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[mec]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[mec]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x12a ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[oacs]="0x12a"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[oacs]=0x12a 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- 
nvme/functions.sh@24 -- # [[ -n 3 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[acl]="3"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[acl]=3 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 3 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[aerl]="3"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[aerl]=3 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[frmw]="0x3"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[frmw]=0x3 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[lpa]="0x7"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[lpa]=0x7 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[elpe]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[elpe]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[npss]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[npss]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[avscc]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[avscc]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[apsta]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[apsta]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 343 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[wctemp]="343"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[wctemp]=343 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 373 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[cctemp]="373"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[cctemp]=373 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[mtfa]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[mtfa]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- 
nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[hmpre]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[hmpre]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[hmmin]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[hmmin]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[tnvmcap]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[tnvmcap]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[unvmcap]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[unvmcap]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[rpmbs]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[rpmbs]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[edstt]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[edstt]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[dsto]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[dsto]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[fwug]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[fwug]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[kas]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[kas]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[hctma]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[hctma]=0 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.612 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[mntmt]="0"' 00:04:18.612 08:17:05 -- nvme/functions.sh@25 -- # nvme1[mntmt]=0 00:04:18.612 08:17:05 -- 
nvme/functions.sh@23 -- # IFS=: 00:04:18.612 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[mxtmt]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[mxtmt]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[sanicap]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[sanicap]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[hmminds]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[hmminds]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[hmmaxd]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[hmmaxd]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[nsetidmax]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[nsetidmax]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[endgidmax]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[endgidmax]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[anatt]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[anatt]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[anacap]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[anacap]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[anagrpmax]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[anagrpmax]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[nanagrpid]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[nanagrpid]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[pels]="0"' 
00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[pels]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[domainid]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[domainid]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[megcap]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[megcap]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x66 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[sqes]="0x66"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[sqes]=0x66 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x44 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[cqes]="0x44"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[cqes]=0x44 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[maxcmd]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[maxcmd]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 256 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[nn]="256"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[nn]=256 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x15d ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[oncs]="0x15d"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[oncs]=0x15d 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[fuses]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[fuses]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[fna]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[fna]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x7 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[vwc]="0x7"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[vwc]=0x7 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 
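Note: the identify-controller values just captured for nvme1 (sqes=0x66, cqes=0x44, oncs=0x15d) are packed bitfields. The short decode below is an illustration only: the field layouts follow the NVMe base specification, the values are copied from the trace, and the test itself does not perform this step.

    #!/usr/bin/env bash
    # Decode a few identify-controller fields from the trace above.
    # SQES/CQES: low nibble = required entry size, high nibble = maximum,
    # both as powers of two. ONCS bit 2 = Dataset Management, bit 3 = Write Zeroes.
    sqes=0x66 cqes=0x44 oncs=0x15d
    printf 'SQ entry size: %d..%d bytes\n' $((2 ** (sqes & 0xf))) $((2 ** ((sqes >> 4) & 0xf)))
    printf 'CQ entry size: %d..%d bytes\n' $((2 ** (cqes & 0xf))) $((2 ** ((cqes >> 4) & 0xf)))
    (( oncs & (1 << 2) )) && echo 'Dataset Management supported'
    (( oncs & (1 << 3) )) && echo 'Write Zeroes supported'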
00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[awun]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[awun]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[awupf]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[awupf]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[icsvscc]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[icsvscc]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[nwpc]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[nwpc]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[acwu]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[acwu]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[ocfs]="0x3"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[ocfs]=0x3 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x1 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[sgls]="0x1"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[sgls]=0x1 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[mnan]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[mnan]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[maxdna]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[maxdna]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[maxcna]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[maxcna]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[oaqd]="0"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[oaqd]=0 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 
08:17:05 -- nvme/functions.sh@24 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12341"' 00:04:18.613 08:17:05 -- nvme/functions.sh@25 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12341 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.613 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.613 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[ioccsz]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1[ioccsz]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[iorcsz]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1[iorcsz]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[icdoff]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1[icdoff]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[fcatt]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1[fcatt]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[msdbd]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1[msdbd]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[ofcs]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1[ofcs]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n - ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1[active_power_workload]="-"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1[active_power_workload]=- 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 
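Note: with nvme1's identify-controller fields stored, the functions.sh@55-@59 lines that follow switch to the namespace loop: bind a nameref to nvme1_ns and call nvme_get with id-ns for every /sys/class/nvme/nvme1/nvme1n* node. A rough standalone equivalent of that enumeration is sketched below (hypothetical script, not the SPDK helper itself).

    #!/usr/bin/env bash
    # Rough sketch of the controller/namespace walk seen in the trace:
    # visit each /sys/class/nvme/nvmeX controller, note its PCI address,
    # then visit its nvmeXnY namespace nodes. All names are illustrative.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        name=${ctrl##*/}                                  # e.g. nvme1
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
        echo "controller $name at $bdf"
        for ns in "$ctrl/${name}n"*; do
            [[ -e $ns ]] && echo "  namespace ${ns##*/}"  # nvme1n1, nvme1n2, ...
        done
    done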
00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@55 -- # local -n _ctrl_ns=nvme1_ns 00:04:18.614 08:17:05 -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:04:18.614 08:17:05 -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@58 -- # ns_dev=nvme1n1 00:04:18.614 08:17:05 -- nvme/functions.sh@59 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:04:18.614 08:17:05 -- nvme/functions.sh@19 -- # local ref=nvme1n1 reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@20 -- # shift 00:04:18.614 08:17:05 -- nvme/functions.sh@22 -- # local -gA 'nvme1n1=()' 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@18 -- # nvme id-ns /dev/nvme1n1 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nsze]="0x100000"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nsze]=0x100000 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[ncap]="0x100000"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[ncap]=0x100000 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nuse]="0x100000"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nuse]=0x100000 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nsfeat]=0x14 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nlbaf]="7"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nlbaf]=7 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[flbas]="0x4"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[flbas]=0x4 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[mc]="0x3"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[mc]=0x3 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:04:18.614 08:17:05 -- 
nvme/functions.sh@25 -- # eval 'nvme1n1[dpc]="0x1f"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[dpc]=0x1f 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[dps]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[dps]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nmic]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nmic]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[rescap]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[rescap]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[fpi]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[fpi]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[dlfeat]="1"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[dlfeat]=1 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nawun]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nawun]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nawupf]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nawupf]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nacwu]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nacwu]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nabsn]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nabsn]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nabo]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nabo]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 
00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nabspf]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nabspf]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[noiob]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[noiob]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nvmcap]="0"' 00:04:18.614 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nvmcap]=0 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.614 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.614 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[npwg]="0"' 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[npwg]=0 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[npwa]="0"' 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[npwa]=0 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[npdg]="0"' 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[npdg]=0 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[npda]="0"' 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[npda]=0 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nows]="0"' 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nows]=0 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[mssrl]="128"' 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[mssrl]=128 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[mcl]="128"' 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[mcl]=128 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[msrc]="127"' 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[msrc]=127 00:04:18.615 08:17:05 -- 
nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nulbaf]="0"' 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nulbaf]=0 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[anagrpid]="0"' 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[anagrpid]=0 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:05 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:05 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nsattr]="0"' 00:04:18.615 08:17:05 -- nvme/functions.sh@25 -- # nvme1n1[nsattr]=0 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nvmsetid]="0"' 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # nvme1n1[nvmsetid]=0 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n1[endgid]="0"' 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # nvme1n1[endgid]=0 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # nvme1n1[eui64]=0000000000000000 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:04:18.615 08:17:06 -- 
nvme/functions.sh@25 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:04:18.615 08:17:06 -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:04:18.615 08:17:06 -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@58 -- # ns_dev=nvme1n2 00:04:18.615 08:17:06 -- nvme/functions.sh@59 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:04:18.615 08:17:06 -- nvme/functions.sh@19 -- # local ref=nvme1n2 reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@20 -- # shift 00:04:18.615 08:17:06 -- nvme/functions.sh@22 -- # local -gA 'nvme1n2=()' 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@18 -- # nvme id-ns /dev/nvme1n2 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nsze]="0x100000"' 00:04:18.615 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nsze]=0x100000 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.615 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.615 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 
0x100000 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[ncap]="0x100000"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[ncap]=0x100000 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nuse]="0x100000"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nuse]=0x100000 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nsfeat]=0x14 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nlbaf]="7"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nlbaf]=7 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[flbas]="0x4"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[flbas]=0x4 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[mc]="0x3"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[mc]=0x3 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[dpc]="0x1f"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[dpc]=0x1f 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[dps]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[dps]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nmic]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nmic]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[rescap]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[rescap]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[fpi]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[fpi]=0 00:04:18.616 08:17:06 -- 
nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[dlfeat]="1"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[dlfeat]=1 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nawun]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nawun]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nawupf]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nawupf]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nacwu]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nacwu]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nabsn]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nabsn]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nabo]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nabo]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nabspf]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nabspf]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[noiob]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[noiob]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nvmcap]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nvmcap]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[npwg]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[npwg]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[npwa]="0"' 
00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[npwa]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[npdg]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[npdg]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[npda]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[npda]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nows]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nows]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[mssrl]="128"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[mssrl]=128 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[mcl]="128"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[mcl]=128 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[msrc]="127"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[msrc]=127 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nulbaf]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nulbaf]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[anagrpid]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[anagrpid]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nsattr]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nsattr]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nvmsetid]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nvmsetid]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- 
nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[endgid]="0"' 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[endgid]=0 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.616 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.616 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:04:18.616 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[eui64]=0000000000000000 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:04:18.617 08:17:06 -- 
nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:04:18.617 08:17:06 -- nvme/functions.sh@56 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:04:18.617 08:17:06 -- nvme/functions.sh@57 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@58 -- # ns_dev=nvme1n3 00:04:18.617 08:17:06 -- nvme/functions.sh@59 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:04:18.617 08:17:06 -- nvme/functions.sh@19 -- # local ref=nvme1n3 reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@20 -- # shift 00:04:18.617 08:17:06 -- nvme/functions.sh@22 -- # local -gA 'nvme1n3=()' 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@18 -- # nvme id-ns /dev/nvme1n3 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n '' ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nsze]="0x100000"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nsze]=0x100000 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[ncap]="0x100000"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[ncap]=0x100000 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x100000 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nuse]="0x100000"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nuse]=0x100000 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x14 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nsfeat]=0x14 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 7 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nlbaf]="7"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nlbaf]=7 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x4 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[flbas]="0x4"' 00:04:18.617 
08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[flbas]=0x4 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x3 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[mc]="0x3"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[mc]=0x3 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0x1f ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[dpc]="0x1f"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[dpc]=0x1f 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[dps]="0"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[dps]=0 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nmic]="0"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nmic]=0 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[rescap]="0"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[rescap]=0 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[fpi]="0"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[fpi]=0 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 1 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[dlfeat]="1"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[dlfeat]=1 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nawun]="0"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nawun]=0 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nawupf]="0"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nawupf]=0 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nacwu]="0"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nacwu]=0 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 
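The nvme1n1/nvme1n2 entries above, and the nvme1n3 fields that continue below, are xtrace output from nvme_get() in nvme/functions.sh. Stripped of the trace prefixes, the loop being replayed is roughly the following; this is a simplified sketch reconstructed from the functions.sh@18-@25 lines visible in the trace, not the verbatim SPDK source:

    nvme_get() {                         # e.g. nvme_get nvme1n2 id-ns /dev/nvme1n2
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # global associative array named after the device node
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # keys show up normalized in the trace (nsze, flbas, lbaf4, ...)
            [[ -n $reg && -n $val ]] && eval "${ref}[$reg]=\"${val# }\""
        done < <(nvme "$@")              # parse the key:value output of 'nvme id-ns /dev/...'
    }

The eval lines in the trace, such as nvme1n2[nsze]="0x100000" or nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)", are this assignment being executed once per identify-namespace field.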
00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nabsn]="0"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nabsn]=0 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nabo]="0"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nabo]=0 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nabspf]="0"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nabspf]=0 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.617 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[noiob]="0"' 00:04:18.617 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[noiob]=0 00:04:18.617 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nvmcap]="0"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nvmcap]=0 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[npwg]="0"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[npwg]=0 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[npwa]="0"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[npwa]=0 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[npdg]="0"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[npdg]=0 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[npda]="0"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[npda]=0 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nows]="0"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nows]=0 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[mssrl]="128"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[mssrl]=128 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- 
nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 128 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[mcl]="128"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[mcl]=128 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 127 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[msrc]="127"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[msrc]=127 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nulbaf]="0"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nulbaf]=0 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[anagrpid]="0"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[anagrpid]=0 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nsattr]="0"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nsattr]=0 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nvmsetid]="0"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nvmsetid]=0 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[endgid]="0"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[endgid]=0 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 00000000000000000000000000000000 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n 0000000000000000 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[eui64]=0000000000000000 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read 
-r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@24 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:04:18.618 08:17:06 -- nvme/functions.sh@25 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # IFS=: 00:04:18.618 08:17:06 -- nvme/functions.sh@23 -- # read -r reg val 00:04:18.618 08:17:06 -- nvme/functions.sh@60 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:04:18.618 08:17:06 -- nvme/functions.sh@62 -- # ctrls_g["$ctrl_dev"]=nvme1 00:04:18.618 08:17:06 -- nvme/functions.sh@63 -- # nvmes_g["$ctrl_dev"]=nvme1_ns 00:04:18.618 08:17:06 -- nvme/functions.sh@64 -- # bdfs_g["$ctrl_dev"]=0000:00:11.0 00:04:18.618 08:17:06 -- nvme/functions.sh@65 -- # ordered_ctrls_g[${ctrl_dev/nvme/}]=nvme1 00:04:18.618 08:17:06 -- nvme/functions.sh@67 -- # (( 2 > 0 )) 00:04:18.618 08:17:06 -- nvme/functions.sh@219 -- # local _ctrls ctrl 00:04:18.618 08:17:06 -- nvme/functions.sh@220 -- # local unvmcap tnvmcap cntlid size blksize=512 00:04:18.618 08:17:06 -- nvme/functions.sh@222 -- # _ctrls=($(get_nvme_with_ns_management)) 00:04:18.618 08:17:06 -- nvme/functions.sh@222 -- # 
get_nvme_with_ns_management 00:04:18.618 08:17:06 -- nvme/functions.sh@157 -- # local _ctrls 00:04:18.618 08:17:06 -- nvme/functions.sh@159 -- # _ctrls=($(get_nvmes_with_ns_management)) 00:04:18.618 08:17:06 -- nvme/functions.sh@159 -- # get_nvmes_with_ns_management 00:04:18.618 08:17:06 -- nvme/functions.sh@146 -- # (( 2 == 0 )) 00:04:18.618 08:17:06 -- nvme/functions.sh@148 -- # local ctrl 00:04:18.618 08:17:06 -- nvme/functions.sh@149 -- # for ctrl in "${!ctrls_g[@]}" 00:04:18.618 08:17:06 -- nvme/functions.sh@150 -- # get_oacs nvme1 nsmgt 00:04:18.618 08:17:06 -- nvme/functions.sh@123 -- # local ctrl=nvme1 bit=nsmgt 00:04:18.618 08:17:06 -- nvme/functions.sh@124 -- # local -A bits 00:04:18.618 08:17:06 -- nvme/functions.sh@127 -- # bits["ss/sr"]=1 00:04:18.618 08:17:06 -- nvme/functions.sh@128 -- # bits["fnvme"]=2 00:04:18.618 08:17:06 -- nvme/functions.sh@129 -- # bits["fc/fi"]=4 00:04:18.618 08:17:06 -- nvme/functions.sh@130 -- # bits["nsmgt"]=8 00:04:18.618 08:17:06 -- nvme/functions.sh@131 -- # bits["self-test"]=16 00:04:18.618 08:17:06 -- nvme/functions.sh@132 -- # bits["directives"]=32 00:04:18.618 08:17:06 -- nvme/functions.sh@133 -- # bits["nvme-mi-s/r"]=64 00:04:18.618 08:17:06 -- nvme/functions.sh@134 -- # bits["virtmgt"]=128 00:04:18.618 08:17:06 -- nvme/functions.sh@135 -- # bits["doorbellbuf"]=256 00:04:18.618 08:17:06 -- nvme/functions.sh@136 -- # bits["getlba"]=512 00:04:18.618 08:17:06 -- nvme/functions.sh@137 -- # bits["commfeatlock"]=1024 00:04:18.618 08:17:06 -- nvme/functions.sh@139 -- # bit=nsmgt 00:04:18.618 08:17:06 -- nvme/functions.sh@140 -- # [[ -n 8 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@142 -- # get_nvme_ctrl_feature nvme1 oacs 00:04:18.618 08:17:06 -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=oacs 00:04:18.618 08:17:06 -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:04:18.618 08:17:06 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:04:18.619 08:17:06 -- nvme/functions.sh@77 -- # [[ -n 0x12a ]] 00:04:18.619 08:17:06 -- nvme/functions.sh@78 -- # echo 0x12a 00:04:18.619 08:17:06 -- nvme/functions.sh@142 -- # (( 0x12a & bits[nsmgt] )) 00:04:18.619 08:17:06 -- nvme/functions.sh@150 -- # echo nvme1 00:04:18.619 08:17:06 -- nvme/functions.sh@149 -- # for ctrl in "${!ctrls_g[@]}" 00:04:18.619 08:17:06 -- nvme/functions.sh@150 -- # get_oacs nvme0 nsmgt 00:04:18.619 08:17:06 -- nvme/functions.sh@123 -- # local ctrl=nvme0 bit=nsmgt 00:04:18.619 08:17:06 -- nvme/functions.sh@124 -- # local -A bits 00:04:18.619 08:17:06 -- nvme/functions.sh@127 -- # bits["ss/sr"]=1 00:04:18.619 08:17:06 -- nvme/functions.sh@128 -- # bits["fnvme"]=2 00:04:18.619 08:17:06 -- nvme/functions.sh@129 -- # bits["fc/fi"]=4 00:04:18.619 08:17:06 -- nvme/functions.sh@130 -- # bits["nsmgt"]=8 00:04:18.619 08:17:06 -- nvme/functions.sh@131 -- # bits["self-test"]=16 00:04:18.619 08:17:06 -- nvme/functions.sh@132 -- # bits["directives"]=32 00:04:18.619 08:17:06 -- nvme/functions.sh@133 -- # bits["nvme-mi-s/r"]=64 00:04:18.619 08:17:06 -- nvme/functions.sh@134 -- # bits["virtmgt"]=128 00:04:18.619 08:17:06 -- nvme/functions.sh@135 -- # bits["doorbellbuf"]=256 00:04:18.619 08:17:06 -- nvme/functions.sh@136 -- # bits["getlba"]=512 00:04:18.619 08:17:06 -- nvme/functions.sh@137 -- # bits["commfeatlock"]=1024 00:04:18.619 08:17:06 -- nvme/functions.sh@139 -- # bit=nsmgt 00:04:18.619 08:17:06 -- nvme/functions.sh@140 -- # [[ -n 8 ]] 00:04:18.619 08:17:06 -- nvme/functions.sh@142 -- # get_nvme_ctrl_feature nvme0 oacs 00:04:18.619 08:17:06 -- nvme/functions.sh@71 -- # local ctrl=nvme0 
reg=oacs 00:04:18.619 08:17:06 -- nvme/functions.sh@73 -- # [[ -n nvme0 ]] 00:04:18.619 08:17:06 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme0 00:04:18.619 08:17:06 -- nvme/functions.sh@77 -- # [[ -n 0x12a ]] 00:04:18.619 08:17:06 -- nvme/functions.sh@78 -- # echo 0x12a 00:04:18.619 08:17:06 -- nvme/functions.sh@142 -- # (( 0x12a & bits[nsmgt] )) 00:04:18.619 08:17:06 -- nvme/functions.sh@150 -- # echo nvme0 00:04:18.619 08:17:06 -- nvme/functions.sh@153 -- # return 0 00:04:18.619 08:17:06 -- nvme/functions.sh@160 -- # (( 2 > 0 )) 00:04:18.619 08:17:06 -- nvme/functions.sh@161 -- # echo nvme1 00:04:18.619 08:17:06 -- nvme/functions.sh@162 -- # return 0 00:04:18.619 08:17:06 -- nvme/functions.sh@224 -- # for ctrl in "${_ctrls[@]}" 00:04:18.619 08:17:06 -- nvme/functions.sh@229 -- # get_nvme_ctrl_feature nvme1 unvmcap 00:04:18.619 08:17:06 -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=unvmcap 00:04:18.619 08:17:06 -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:04:18.619 08:17:06 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:04:18.619 08:17:06 -- nvme/functions.sh@77 -- # [[ -n 0 ]] 00:04:18.619 08:17:06 -- nvme/functions.sh@78 -- # echo 0 00:04:18.619 08:17:06 -- nvme/functions.sh@229 -- # unvmcap=0 00:04:18.619 08:17:06 -- nvme/functions.sh@230 -- # get_nvme_ctrl_feature nvme1 tnvmcap 00:04:18.619 08:17:06 -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=tnvmcap 00:04:18.619 08:17:06 -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:04:18.619 08:17:06 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:04:18.619 08:17:06 -- nvme/functions.sh@77 -- # [[ -n 0 ]] 00:04:18.619 08:17:06 -- nvme/functions.sh@78 -- # echo 0 00:04:18.619 08:17:06 -- nvme/functions.sh@230 -- # tnvmcap=0 00:04:18.619 08:17:06 -- nvme/functions.sh@231 -- # get_nvme_ctrl_feature nvme1 cntlid 00:04:18.619 08:17:06 -- nvme/functions.sh@71 -- # local ctrl=nvme1 reg=cntlid 00:04:18.619 08:17:06 -- nvme/functions.sh@73 -- # [[ -n nvme1 ]] 00:04:18.619 08:17:06 -- nvme/functions.sh@75 -- # local -n _ctrl=nvme1 00:04:18.619 08:17:06 -- nvme/functions.sh@77 -- # [[ -n 0 ]] 00:04:18.619 08:17:06 -- nvme/functions.sh@78 -- # echo 0 00:04:18.619 08:17:06 -- nvme/functions.sh@231 -- # cntlid=0 00:04:18.619 08:17:06 -- nvme/functions.sh@232 -- # (( unvmcap == 0 )) 00:04:18.619 08:17:06 -- nvme/functions.sh@234 -- # continue 00:04:18.619 08:17:06 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:18.619 08:17:06 -- common/autotest_common.sh@735 -- # xtrace_disable 00:04:18.619 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:04:18.619 08:17:06 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:18.619 08:17:06 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:18.619 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:04:18.619 08:17:06 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.556 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.556 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.556 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.556 08:17:07 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:19.556 08:17:07 -- common/autotest_common.sh@735 -- # xtrace_disable 00:04:19.556 08:17:07 -- common/autotest_common.sh@10 -- # set +x 00:04:19.556 08:17:07 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:19.556 08:17:07 -- common/autotest_common.sh@1521 -- # local bdfs bdf bdf_id 00:04:19.556 08:17:07 -- common/autotest_common.sh@1523 -- # mapfile -t 
bdfs 00:04:19.556 08:17:07 -- common/autotest_common.sh@1523 -- # get_nvme_bdfs_by_id 0x0a54 00:04:19.556 08:17:07 -- common/autotest_common.sh@1505 -- # bdfs=() 00:04:19.556 08:17:07 -- common/autotest_common.sh@1505 -- # _bdfs=() 00:04:19.556 08:17:07 -- common/autotest_common.sh@1505 -- # local bdfs _bdfs bdf 00:04:19.556 08:17:07 -- common/autotest_common.sh@1506 -- # _bdfs=($(get_nvme_bdfs)) 00:04:19.556 08:17:07 -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:04:19.556 08:17:07 -- common/autotest_common.sh@1486 -- # bdfs=() 00:04:19.556 08:17:07 -- common/autotest_common.sh@1486 -- # local bdfs 00:04:19.556 08:17:07 -- common/autotest_common.sh@1487 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.556 08:17:07 -- common/autotest_common.sh@1487 -- # jq -r '.config[].params.traddr' 00:04:19.556 08:17:07 -- common/autotest_common.sh@1487 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:19.815 08:17:07 -- common/autotest_common.sh@1488 -- # (( 2 == 0 )) 00:04:19.815 08:17:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:19.815 08:17:07 -- common/autotest_common.sh@1508 -- # for bdf in "${_bdfs[@]}" 00:04:19.815 08:17:07 -- common/autotest_common.sh@1509 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:19.815 08:17:07 -- common/autotest_common.sh@1509 -- # device=0x0010 00:04:19.815 08:17:07 -- common/autotest_common.sh@1510 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:19.815 08:17:07 -- common/autotest_common.sh@1508 -- # for bdf in "${_bdfs[@]}" 00:04:19.815 08:17:07 -- common/autotest_common.sh@1509 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:19.815 08:17:07 -- common/autotest_common.sh@1509 -- # device=0x0010 00:04:19.815 08:17:07 -- common/autotest_common.sh@1510 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:19.815 08:17:07 -- common/autotest_common.sh@1515 -- # (( 0 > 0 )) 00:04:19.815 08:17:07 -- common/autotest_common.sh@1515 -- # return 0 00:04:19.815 08:17:07 -- common/autotest_common.sh@1524 -- # [[ -z '' ]] 00:04:19.815 08:17:07 -- common/autotest_common.sh@1525 -- # return 0 00:04:19.815 08:17:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:19.815 08:17:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:19.815 08:17:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:19.815 08:17:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:19.815 08:17:07 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:19.815 08:17:07 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:19.815 08:17:07 -- common/autotest_common.sh@10 -- # set +x 00:04:19.815 08:17:07 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:19.815 08:17:07 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:19.815 08:17:07 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:19.815 08:17:07 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:19.815 08:17:07 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:19.815 08:17:07 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:19.815 08:17:07 -- common/autotest_common.sh@10 -- # set +x 00:04:19.815 ************************************ 00:04:19.815 START TEST env 00:04:19.815 ************************************ 00:04:19.815 08:17:07 env -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:19.815 * Looking for test storage... 
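The get_nvmes_with_ns_management / get_oacs trace above decides which controllers support namespace management by testing bit 3 (value 8) of the OACS word from identify-controller. Reduced to its core, the check traced at functions.sh@123-@150 is roughly the following sketch, reconstructed from the trace rather than copied from the source:

    local -A bits=([nsmgt]=8)                         # OACS bit 3 = namespace management
    for ctrl in "${!ctrls_g[@]}"; do
        oacs=$(get_nvme_ctrl_feature "$ctrl" oacs)    # both controllers report 0x12a here
        (( oacs & bits[nsmgt] )) && echo "$ctrl"      # 0x12a has bit 3 set, so nvme0 and nvme1 qualify
    done

0x12a is binary 1 0010 1010, so per the bits table in the trace these controllers advertise format NVM (2), namespace management (8), directives (32) and the doorbell buffer config (256). The opal_revert_cleanup block above then listed BDFs via scripts/gen_nvme.sh | jq -r '.config[].params.traddr' and skipped both 0000:00:10.0 and 0000:00:11.0 because /sys/bus/pci/devices/<bdf>/device reads 0x0010 (the QEMU NVMe device ID seen earlier as 1b36:0010) rather than the 0x0a54 it filters for.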
00:04:19.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:19.815 08:17:07 env -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:04:19.815 08:17:07 env -- common/autotest_common.sh@1638 -- # lcov --version 00:04:19.815 08:17:07 env -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:04:19.815 08:17:07 env -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:04:19.815 08:17:07 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.815 08:17:07 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.815 08:17:07 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.816 08:17:07 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.816 08:17:07 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.816 08:17:07 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.816 08:17:07 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.816 08:17:07 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.816 08:17:07 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.816 08:17:07 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.816 08:17:07 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.816 08:17:07 env -- scripts/common.sh@344 -- # case "$op" in 00:04:19.816 08:17:07 env -- scripts/common.sh@345 -- # : 1 00:04:19.816 08:17:07 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.816 08:17:07 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.816 08:17:07 env -- scripts/common.sh@365 -- # decimal 1 00:04:19.816 08:17:07 env -- scripts/common.sh@353 -- # local d=1 00:04:19.816 08:17:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.816 08:17:07 env -- scripts/common.sh@355 -- # echo 1 00:04:19.816 08:17:07 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.073 08:17:07 env -- scripts/common.sh@366 -- # decimal 2 00:04:20.073 08:17:07 env -- scripts/common.sh@353 -- # local d=2 00:04:20.073 08:17:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.073 08:17:07 env -- scripts/common.sh@355 -- # echo 2 00:04:20.073 08:17:07 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.073 08:17:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.073 08:17:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.073 08:17:07 env -- scripts/common.sh@368 -- # return 0 00:04:20.073 08:17:07 env -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.073 08:17:07 env -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:04:20.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.073 --rc genhtml_branch_coverage=1 00:04:20.073 --rc genhtml_function_coverage=1 00:04:20.073 --rc genhtml_legend=1 00:04:20.073 --rc geninfo_all_blocks=1 00:04:20.073 --rc geninfo_unexecuted_blocks=1 00:04:20.073 00:04:20.073 ' 00:04:20.073 08:17:07 env -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:04:20.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.073 --rc genhtml_branch_coverage=1 00:04:20.073 --rc genhtml_function_coverage=1 00:04:20.073 --rc genhtml_legend=1 00:04:20.073 --rc geninfo_all_blocks=1 00:04:20.073 --rc geninfo_unexecuted_blocks=1 00:04:20.073 00:04:20.073 ' 00:04:20.073 08:17:07 env -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:04:20.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.073 --rc genhtml_branch_coverage=1 00:04:20.073 --rc genhtml_function_coverage=1 00:04:20.073 --rc 
genhtml_legend=1 00:04:20.073 --rc geninfo_all_blocks=1 00:04:20.073 --rc geninfo_unexecuted_blocks=1 00:04:20.073 00:04:20.073 ' 00:04:20.073 08:17:07 env -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:04:20.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.073 --rc genhtml_branch_coverage=1 00:04:20.073 --rc genhtml_function_coverage=1 00:04:20.073 --rc genhtml_legend=1 00:04:20.073 --rc geninfo_all_blocks=1 00:04:20.073 --rc geninfo_unexecuted_blocks=1 00:04:20.073 00:04:20.073 ' 00:04:20.073 08:17:07 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:20.073 08:17:07 env -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:20.073 08:17:07 env -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:20.073 08:17:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.073 ************************************ 00:04:20.073 START TEST env_memory 00:04:20.073 ************************************ 00:04:20.073 08:17:07 env.env_memory -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:20.073 00:04:20.073 00:04:20.073 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.073 http://cunit.sourceforge.net/ 00:04:20.073 00:04:20.073 00:04:20.073 Suite: memory 00:04:20.073 Test: alloc and free memory map ...[2024-11-20 08:17:07.438292] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.073 passed 00:04:20.073 Test: mem map translation ...[2024-11-20 08:17:07.469880] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.073 [2024-11-20 08:17:07.470206] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.073 [2024-11-20 08:17:07.470417] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.073 [2024-11-20 08:17:07.470689] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.073 passed 00:04:20.073 Test: mem map registration ...[2024-11-20 08:17:07.535218] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:20.073 [2024-11-20 08:17:07.535527] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:20.073 passed 00:04:20.073 Test: mem map adjacent registrations ...passed 00:04:20.073 00:04:20.073 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.073 suites 1 1 n/a 0 0 00:04:20.073 tests 4 4 4 0 0 00:04:20.073 asserts 152 152 152 0 n/a 00:04:20.073 00:04:20.073 Elapsed time = 0.201 seconds 00:04:20.073 00:04:20.073 real 0m0.221s 00:04:20.073 user 0m0.203s 00:04:20.073 sys 0m0.012s 00:04:20.073 08:17:07 env.env_memory -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:20.073 ************************************ 00:04:20.073 END TEST env_memory 00:04:20.073 08:17:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.073 ************************************ 00:04:20.331 08:17:07 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:20.331 08:17:07 env -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:20.332 08:17:07 env -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:20.332 08:17:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.332 ************************************ 00:04:20.332 START TEST env_vtophys 00:04:20.332 ************************************ 00:04:20.332 08:17:07 env.env_vtophys -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:20.332 EAL: lib.eal log level changed from notice to debug 00:04:20.332 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.332 EAL: Detected lcore 1 as core 0 on socket 0 00:04:20.332 EAL: Detected lcore 2 as core 0 on socket 0 00:04:20.332 EAL: Detected lcore 3 as core 0 on socket 0 00:04:20.332 EAL: Detected lcore 4 as core 0 on socket 0 00:04:20.332 EAL: Detected lcore 5 as core 0 on socket 0 00:04:20.332 EAL: Detected lcore 6 as core 0 on socket 0 00:04:20.332 EAL: Detected lcore 7 as core 0 on socket 0 00:04:20.332 EAL: Detected lcore 8 as core 0 on socket 0 00:04:20.332 EAL: Detected lcore 9 as core 0 on socket 0 00:04:20.332 EAL: Maximum logical cores by configuration: 128 00:04:20.332 EAL: Detected CPU lcores: 10 00:04:20.332 EAL: Detected NUMA nodes: 1 00:04:20.332 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:20.332 EAL: Detected shared linkage of DPDK 00:04:20.332 EAL: No shared files mode enabled, IPC will be disabled 00:04:20.332 EAL: Selected IOVA mode 'PA' 00:04:20.332 EAL: Probing VFIO support... 00:04:20.332 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:20.332 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:20.332 EAL: Ask a virtual area of 0x2e000 bytes 00:04:20.332 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:20.332 EAL: Setting up physically contiguous memory... 
00:04:20.332 EAL: Setting maximum number of open files to 524288 00:04:20.332 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:20.332 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:20.332 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.332 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:20.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.332 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.332 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:20.332 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:20.332 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.332 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:20.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.332 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.332 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:20.332 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:20.332 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.332 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:20.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.332 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.332 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:20.332 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:20.332 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.332 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:20.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.332 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.332 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:20.332 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:20.332 EAL: Hugepages will be freed exactly as allocated. 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: TSC frequency is ~2200000 KHz 00:04:20.332 EAL: Main lcore 0 is ready (tid=7f2b69eaca00;cpuset=[0]) 00:04:20.332 EAL: Trying to obtain current memory policy. 00:04:20.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.332 EAL: Restoring previous memory policy: 0 00:04:20.332 EAL: request: mp_malloc_sync 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: Heap on socket 0 was expanded by 2MB 00:04:20.332 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:20.332 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:20.332 EAL: Mem event callback 'spdk:(nil)' registered 00:04:20.332 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:20.332 00:04:20.332 00:04:20.332 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.332 http://cunit.sourceforge.net/ 00:04:20.332 00:04:20.332 00:04:20.332 Suite: components_suite 00:04:20.332 Test: vtophys_malloc_test ...passed 00:04:20.332 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
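As a sanity check on the reservation sizes above: each memseg list is created with n_segs:8192 and hugepage_sz:2097152, and 8192 x 2 MiB = 17,179,869,184 bytes = 0x400000000, which matches the size reported for every "VA reserved for memseg list" line. With four lists, EAL reserves 4 x 16 GiB = 64 GiB of virtual address space up front, while actual hugepages are only mapped later as the heap expands during the malloc tests below.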
00:04:20.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.332 EAL: Restoring previous memory policy: 4 00:04:20.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.332 EAL: request: mp_malloc_sync 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: Heap on socket 0 was expanded by 4MB 00:04:20.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.332 EAL: request: mp_malloc_sync 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: Heap on socket 0 was shrunk by 4MB 00:04:20.332 EAL: Trying to obtain current memory policy. 00:04:20.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.332 EAL: Restoring previous memory policy: 4 00:04:20.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.332 EAL: request: mp_malloc_sync 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: Heap on socket 0 was expanded by 6MB 00:04:20.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.332 EAL: request: mp_malloc_sync 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: Heap on socket 0 was shrunk by 6MB 00:04:20.332 EAL: Trying to obtain current memory policy. 00:04:20.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.332 EAL: Restoring previous memory policy: 4 00:04:20.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.332 EAL: request: mp_malloc_sync 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: Heap on socket 0 was expanded by 10MB 00:04:20.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.332 EAL: request: mp_malloc_sync 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: Heap on socket 0 was shrunk by 10MB 00:04:20.332 EAL: Trying to obtain current memory policy. 00:04:20.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.332 EAL: Restoring previous memory policy: 4 00:04:20.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.332 EAL: request: mp_malloc_sync 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: Heap on socket 0 was expanded by 18MB 00:04:20.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.332 EAL: request: mp_malloc_sync 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: Heap on socket 0 was shrunk by 18MB 00:04:20.332 EAL: Trying to obtain current memory policy. 00:04:20.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.332 EAL: Restoring previous memory policy: 4 00:04:20.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.332 EAL: request: mp_malloc_sync 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: Heap on socket 0 was expanded by 34MB 00:04:20.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.332 EAL: request: mp_malloc_sync 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: Heap on socket 0 was shrunk by 34MB 00:04:20.332 EAL: Trying to obtain current memory policy. 
00:04:20.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.332 EAL: Restoring previous memory policy: 4 00:04:20.332 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.332 EAL: request: mp_malloc_sync 00:04:20.332 EAL: No shared files mode enabled, IPC is disabled 00:04:20.332 EAL: Heap on socket 0 was expanded by 66MB 00:04:20.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.591 EAL: request: mp_malloc_sync 00:04:20.591 EAL: No shared files mode enabled, IPC is disabled 00:04:20.591 EAL: Heap on socket 0 was shrunk by 66MB 00:04:20.591 EAL: Trying to obtain current memory policy. 00:04:20.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.591 EAL: Restoring previous memory policy: 4 00:04:20.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.591 EAL: request: mp_malloc_sync 00:04:20.591 EAL: No shared files mode enabled, IPC is disabled 00:04:20.591 EAL: Heap on socket 0 was expanded by 130MB 00:04:20.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.591 EAL: request: mp_malloc_sync 00:04:20.591 EAL: No shared files mode enabled, IPC is disabled 00:04:20.591 EAL: Heap on socket 0 was shrunk by 130MB 00:04:20.591 EAL: Trying to obtain current memory policy. 00:04:20.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.591 EAL: Restoring previous memory policy: 4 00:04:20.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.591 EAL: request: mp_malloc_sync 00:04:20.591 EAL: No shared files mode enabled, IPC is disabled 00:04:20.591 EAL: Heap on socket 0 was expanded by 258MB 00:04:20.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.849 EAL: request: mp_malloc_sync 00:04:20.849 EAL: No shared files mode enabled, IPC is disabled 00:04:20.849 EAL: Heap on socket 0 was shrunk by 258MB 00:04:20.849 EAL: Trying to obtain current memory policy. 00:04:20.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.849 EAL: Restoring previous memory policy: 4 00:04:20.849 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.849 EAL: request: mp_malloc_sync 00:04:20.849 EAL: No shared files mode enabled, IPC is disabled 00:04:20.849 EAL: Heap on socket 0 was expanded by 514MB 00:04:20.849 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.108 EAL: request: mp_malloc_sync 00:04:21.108 EAL: No shared files mode enabled, IPC is disabled 00:04:21.108 EAL: Heap on socket 0 was shrunk by 514MB 00:04:21.108 EAL: Trying to obtain current memory policy. 
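The heap sizes in this suite follow a simple pattern: the expansions logged so far are 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB, i.e. 2^k + 2 MB for k = 1..9, and the final 1026 MB (2^10 + 2) round follows below before each expansion is shrunk back, leaving only the initial 2 MB that is released at the end of the suite. This is consistent with vtophys_spdk_malloc_test allocating power-of-two sized buffers while EAL reports the slightly larger amount it actually grows the heap by; the exact overhead is an inference from the log rather than something the log states.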
00:04:21.108 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.366 EAL: Restoring previous memory policy: 4 00:04:21.366 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.366 EAL: request: mp_malloc_sync 00:04:21.366 EAL: No shared files mode enabled, IPC is disabled 00:04:21.366 EAL: Heap on socket 0 was expanded by 1026MB 00:04:21.624 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.624 passed 00:04:21.624 00:04:21.624 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.624 suites 1 1 n/a 0 0 00:04:21.624 tests 2 2 2 0 0 00:04:21.624 asserts 3524 3524 3524 0 n/a 00:04:21.624 00:04:21.624 Elapsed time = 1.277 seconds 00:04:21.624 EAL: request: mp_malloc_sync 00:04:21.624 EAL: No shared files mode enabled, IPC is disabled 00:04:21.624 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:21.624 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.624 EAL: request: mp_malloc_sync 00:04:21.624 EAL: No shared files mode enabled, IPC is disabled 00:04:21.624 EAL: Heap on socket 0 was shrunk by 2MB 00:04:21.624 EAL: No shared files mode enabled, IPC is disabled 00:04:21.624 EAL: No shared files mode enabled, IPC is disabled 00:04:21.624 EAL: No shared files mode enabled, IPC is disabled 00:04:21.624 ************************************ 00:04:21.624 END TEST env_vtophys 00:04:21.624 ************************************ 00:04:21.624 00:04:21.624 real 0m1.498s 00:04:21.624 user 0m0.811s 00:04:21.624 sys 0m0.541s 00:04:21.624 08:17:09 env.env_vtophys -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:21.624 08:17:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:21.881 08:17:09 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:21.881 08:17:09 env -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:21.881 08:17:09 env -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:21.881 08:17:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.881 ************************************ 00:04:21.881 START TEST env_pci 00:04:21.881 ************************************ 00:04:21.881 08:17:09 env.env_pci -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:21.881 00:04:21.881 00:04:21.881 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.881 http://cunit.sourceforge.net/ 00:04:21.881 00:04:21.881 00:04:21.881 Suite: pci 00:04:21.881 Test: pci_hook ...[2024-11-20 08:17:09.225741] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56301 has claimed it 00:04:21.881 passed 00:04:21.881 00:04:21.881 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.881 suites 1 1 n/a 0 0 00:04:21.881 tests 1 1 1 0 0 00:04:21.881 asserts 25 25 25 0 n/a 00:04:21.881 00:04:21.881 Elapsed time = 0.002 seconds 00:04:21.881 EAL: Cannot find device (10000:00:01.0) 00:04:21.881 EAL: Failed to attach device on primary process 00:04:21.881 ************************************ 00:04:21.881 END TEST env_pci 00:04:21.881 ************************************ 00:04:21.881 00:04:21.881 real 0m0.022s 00:04:21.881 user 0m0.010s 00:04:21.881 sys 0m0.012s 00:04:21.881 08:17:09 env.env_pci -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:21.881 08:17:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:21.881 08:17:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:21.881 08:17:09 env -- env/env.sh@15 -- # uname 00:04:21.881 08:17:09 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:21.881 08:17:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:21.881 08:17:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.881 08:17:09 env -- common/autotest_common.sh@1108 -- # '[' 5 -le 1 ']' 00:04:21.881 08:17:09 env -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:21.881 08:17:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.881 ************************************ 00:04:21.881 START TEST env_dpdk_post_init 00:04:21.881 ************************************ 00:04:21.881 08:17:09 env.env_dpdk_post_init -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.881 EAL: Detected CPU lcores: 10 00:04:21.881 EAL: Detected NUMA nodes: 1 00:04:21.881 EAL: Detected shared linkage of DPDK 00:04:21.881 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.881 EAL: Selected IOVA mode 'PA' 00:04:21.881 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:22.139 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:22.139 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:22.139 Starting DPDK initialization... 00:04:22.139 Starting SPDK post initialization... 00:04:22.139 SPDK NVMe probe 00:04:22.139 Attaching to 0000:00:10.0 00:04:22.139 Attaching to 0000:00:11.0 00:04:22.139 Attached to 0000:00:10.0 00:04:22.139 Attached to 0000:00:11.0 00:04:22.139 Cleaning up... 00:04:22.139 00:04:22.139 real 0m0.195s 00:04:22.139 user 0m0.054s 00:04:22.139 sys 0m0.041s 00:04:22.139 08:17:09 env.env_dpdk_post_init -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:22.139 ************************************ 00:04:22.139 08:17:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.139 END TEST env_dpdk_post_init 00:04:22.139 ************************************ 00:04:22.139 08:17:09 env -- env/env.sh@26 -- # uname 00:04:22.139 08:17:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:22.139 08:17:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:22.139 08:17:09 env -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:22.139 08:17:09 env -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:22.139 08:17:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.139 ************************************ 00:04:22.139 START TEST env_mem_callbacks 00:04:22.139 ************************************ 00:04:22.139 08:17:09 env.env_mem_callbacks -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:22.139 EAL: Detected CPU lcores: 10 00:04:22.139 EAL: Detected NUMA nodes: 1 00:04:22.139 EAL: Detected shared linkage of DPDK 00:04:22.139 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:22.139 EAL: Selected IOVA mode 'PA' 00:04:22.139 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:22.139 00:04:22.139 00:04:22.139 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.139 http://cunit.sourceforge.net/ 00:04:22.139 00:04:22.139 00:04:22.139 Suite: memory 00:04:22.139 Test: test ... 
00:04:22.139 register 0x200000200000 2097152 00:04:22.139 malloc 3145728 00:04:22.139 register 0x200000400000 4194304 00:04:22.139 buf 0x200000500000 len 3145728 PASSED 00:04:22.139 malloc 64 00:04:22.139 buf 0x2000004fff40 len 64 PASSED 00:04:22.139 malloc 4194304 00:04:22.139 register 0x200000800000 6291456 00:04:22.139 buf 0x200000a00000 len 4194304 PASSED 00:04:22.139 free 0x200000500000 3145728 00:04:22.139 free 0x2000004fff40 64 00:04:22.139 unregister 0x200000400000 4194304 PASSED 00:04:22.139 free 0x200000a00000 4194304 00:04:22.139 unregister 0x200000800000 6291456 PASSED 00:04:22.139 malloc 8388608 00:04:22.139 register 0x200000400000 10485760 00:04:22.139 buf 0x200000600000 len 8388608 PASSED 00:04:22.139 free 0x200000600000 8388608 00:04:22.139 unregister 0x200000400000 10485760 PASSED 00:04:22.139 passed 00:04:22.139 00:04:22.139 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.139 suites 1 1 n/a 0 0 00:04:22.139 tests 1 1 1 0 0 00:04:22.139 asserts 15 15 15 0 n/a 00:04:22.139 00:04:22.139 Elapsed time = 0.008 seconds 00:04:22.139 00:04:22.139 real 0m0.144s 00:04:22.139 user 0m0.018s 00:04:22.139 sys 0m0.023s 00:04:22.139 ************************************ 00:04:22.139 END TEST env_mem_callbacks 00:04:22.139 ************************************ 00:04:22.139 08:17:09 env.env_mem_callbacks -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:22.139 08:17:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:22.397 ************************************ 00:04:22.397 END TEST env 00:04:22.397 ************************************ 00:04:22.397 00:04:22.397 real 0m2.572s 00:04:22.397 user 0m1.308s 00:04:22.397 sys 0m0.896s 00:04:22.397 08:17:09 env -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:22.397 08:17:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.397 08:17:09 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:22.397 08:17:09 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:22.397 08:17:09 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:22.397 08:17:09 -- common/autotest_common.sh@10 -- # set +x 00:04:22.397 ************************************ 00:04:22.397 START TEST rpc 00:04:22.397 ************************************ 00:04:22.397 08:17:09 rpc -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:22.397 * Looking for test storage... 
00:04:22.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:22.397 08:17:09 rpc -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:04:22.397 08:17:09 rpc -- common/autotest_common.sh@1638 -- # lcov --version 00:04:22.397 08:17:09 rpc -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:04:22.655 08:17:10 rpc -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:04:22.655 08:17:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.655 08:17:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.655 08:17:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.655 08:17:10 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.655 08:17:10 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.655 08:17:10 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.655 08:17:10 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.655 08:17:10 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.655 08:17:10 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.655 08:17:10 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.655 08:17:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.655 08:17:10 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:22.655 08:17:10 rpc -- scripts/common.sh@345 -- # : 1 00:04:22.655 08:17:10 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.655 08:17:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:22.655 08:17:10 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:22.655 08:17:10 rpc -- scripts/common.sh@353 -- # local d=1 00:04:22.655 08:17:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.655 08:17:10 rpc -- scripts/common.sh@355 -- # echo 1 00:04:22.655 08:17:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.655 08:17:10 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:22.655 08:17:10 rpc -- scripts/common.sh@353 -- # local d=2 00:04:22.655 08:17:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.655 08:17:10 rpc -- scripts/common.sh@355 -- # echo 2 00:04:22.655 08:17:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.655 08:17:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.655 08:17:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.655 08:17:10 rpc -- scripts/common.sh@368 -- # return 0 00:04:22.655 08:17:10 rpc -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.655 08:17:10 rpc -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:04:22.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.655 --rc genhtml_branch_coverage=1 00:04:22.655 --rc genhtml_function_coverage=1 00:04:22.655 --rc genhtml_legend=1 00:04:22.655 --rc geninfo_all_blocks=1 00:04:22.655 --rc geninfo_unexecuted_blocks=1 00:04:22.655 00:04:22.655 ' 00:04:22.655 08:17:10 rpc -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:04:22.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.655 --rc genhtml_branch_coverage=1 00:04:22.655 --rc genhtml_function_coverage=1 00:04:22.655 --rc genhtml_legend=1 00:04:22.655 --rc geninfo_all_blocks=1 00:04:22.655 --rc geninfo_unexecuted_blocks=1 00:04:22.655 00:04:22.655 ' 00:04:22.655 08:17:10 rpc -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:04:22.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.655 --rc genhtml_branch_coverage=1 00:04:22.655 --rc genhtml_function_coverage=1 00:04:22.655 --rc 
genhtml_legend=1 00:04:22.655 --rc geninfo_all_blocks=1 00:04:22.655 --rc geninfo_unexecuted_blocks=1 00:04:22.655 00:04:22.655 ' 00:04:22.655 08:17:10 rpc -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:04:22.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.655 --rc genhtml_branch_coverage=1 00:04:22.655 --rc genhtml_function_coverage=1 00:04:22.655 --rc genhtml_legend=1 00:04:22.655 --rc geninfo_all_blocks=1 00:04:22.655 --rc geninfo_unexecuted_blocks=1 00:04:22.655 00:04:22.655 ' 00:04:22.655 08:17:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56425 00:04:22.655 08:17:10 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:22.655 08:17:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.655 08:17:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56425 00:04:22.655 08:17:10 rpc -- common/autotest_common.sh@838 -- # '[' -z 56425 ']' 00:04:22.655 08:17:10 rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.655 08:17:10 rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:04:22.655 08:17:10 rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.655 08:17:10 rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:04:22.655 08:17:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.656 [2024-11-20 08:17:10.111692] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:04:22.656 [2024-11-20 08:17:10.111830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56425 ] 00:04:22.914 [2024-11-20 08:17:10.253969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.914 [2024-11-20 08:17:10.316372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:22.914 [2024-11-20 08:17:10.316433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56425' to capture a snapshot of events at runtime. 00:04:22.914 [2024-11-20 08:17:10.316445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:22.914 [2024-11-20 08:17:10.316454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:22.914 [2024-11-20 08:17:10.316461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56425 for offline analysis/debug. 
00:04:22.914 [2024-11-20 08:17:10.316940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.914 [2024-11-20 08:17:10.391025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:23.849 08:17:11 rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:04:23.849 08:17:11 rpc -- common/autotest_common.sh@871 -- # return 0 00:04:23.849 08:17:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.849 08:17:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.849 08:17:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:23.849 08:17:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:23.849 08:17:11 rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:23.849 08:17:11 rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:23.849 08:17:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.849 ************************************ 00:04:23.849 START TEST rpc_integrity 00:04:23.849 ************************************ 00:04:23.849 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@1132 -- # rpc_integrity 00:04:23.849 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.849 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:23.849 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.849 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:23.849 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.849 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.849 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.849 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.849 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:23.849 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.849 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:23.849 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:23.849 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.849 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:23.849 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.849 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:23.849 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.849 { 00:04:23.849 "name": "Malloc0", 00:04:23.849 "aliases": [ 00:04:23.849 "e491e571-2e81-4731-8ff2-26f9aae307c1" 00:04:23.849 ], 00:04:23.849 "product_name": "Malloc disk", 00:04:23.849 "block_size": 512, 00:04:23.849 "num_blocks": 16384, 00:04:23.849 "uuid": "e491e571-2e81-4731-8ff2-26f9aae307c1", 00:04:23.849 "assigned_rate_limits": { 00:04:23.849 "rw_ios_per_sec": 0, 00:04:23.849 "rw_mbytes_per_sec": 0, 00:04:23.849 "r_mbytes_per_sec": 0, 00:04:23.849 "w_mbytes_per_sec": 0 00:04:23.849 }, 00:04:23.849 "claimed": false, 00:04:23.849 "zoned": false, 00:04:23.849 
"supported_io_types": { 00:04:23.849 "read": true, 00:04:23.849 "write": true, 00:04:23.849 "unmap": true, 00:04:23.849 "flush": true, 00:04:23.849 "reset": true, 00:04:23.849 "nvme_admin": false, 00:04:23.849 "nvme_io": false, 00:04:23.849 "nvme_io_md": false, 00:04:23.849 "write_zeroes": true, 00:04:23.849 "zcopy": true, 00:04:23.849 "get_zone_info": false, 00:04:23.849 "zone_management": false, 00:04:23.849 "zone_append": false, 00:04:23.849 "compare": false, 00:04:23.849 "compare_and_write": false, 00:04:23.849 "abort": true, 00:04:23.849 "seek_hole": false, 00:04:23.849 "seek_data": false, 00:04:23.849 "copy": true, 00:04:23.849 "nvme_iov_md": false 00:04:23.849 }, 00:04:23.849 "memory_domains": [ 00:04:23.849 { 00:04:23.849 "dma_device_id": "system", 00:04:23.849 "dma_device_type": 1 00:04:23.849 }, 00:04:23.849 { 00:04:23.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.849 "dma_device_type": 2 00:04:23.849 } 00:04:23.849 ], 00:04:23.849 "driver_specific": {} 00:04:23.849 } 00:04:23.849 ]' 00:04:23.849 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.849 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.849 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:23.850 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:23.850 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.850 [2024-11-20 08:17:11.313288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:23.850 [2024-11-20 08:17:11.313349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.850 [2024-11-20 08:17:11.313371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x986f20 00:04:23.850 [2024-11-20 08:17:11.313381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.850 [2024-11-20 08:17:11.315171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.850 [2024-11-20 08:17:11.315209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.850 Passthru0 00:04:23.850 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:23.850 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.850 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:23.850 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.850 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:23.850 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.850 { 00:04:23.850 "name": "Malloc0", 00:04:23.850 "aliases": [ 00:04:23.850 "e491e571-2e81-4731-8ff2-26f9aae307c1" 00:04:23.850 ], 00:04:23.850 "product_name": "Malloc disk", 00:04:23.850 "block_size": 512, 00:04:23.850 "num_blocks": 16384, 00:04:23.850 "uuid": "e491e571-2e81-4731-8ff2-26f9aae307c1", 00:04:23.850 "assigned_rate_limits": { 00:04:23.850 "rw_ios_per_sec": 0, 00:04:23.850 "rw_mbytes_per_sec": 0, 00:04:23.850 "r_mbytes_per_sec": 0, 00:04:23.850 "w_mbytes_per_sec": 0 00:04:23.850 }, 00:04:23.850 "claimed": true, 00:04:23.850 "claim_type": "exclusive_write", 00:04:23.850 "zoned": false, 00:04:23.850 "supported_io_types": { 00:04:23.850 "read": true, 00:04:23.850 "write": true, 00:04:23.850 "unmap": true, 00:04:23.850 "flush": true, 00:04:23.850 "reset": true, 00:04:23.850 "nvme_admin": false, 
00:04:23.850 "nvme_io": false, 00:04:23.850 "nvme_io_md": false, 00:04:23.850 "write_zeroes": true, 00:04:23.850 "zcopy": true, 00:04:23.850 "get_zone_info": false, 00:04:23.850 "zone_management": false, 00:04:23.850 "zone_append": false, 00:04:23.850 "compare": false, 00:04:23.850 "compare_and_write": false, 00:04:23.850 "abort": true, 00:04:23.850 "seek_hole": false, 00:04:23.850 "seek_data": false, 00:04:23.850 "copy": true, 00:04:23.850 "nvme_iov_md": false 00:04:23.850 }, 00:04:23.850 "memory_domains": [ 00:04:23.850 { 00:04:23.850 "dma_device_id": "system", 00:04:23.850 "dma_device_type": 1 00:04:23.850 }, 00:04:23.850 { 00:04:23.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.850 "dma_device_type": 2 00:04:23.850 } 00:04:23.850 ], 00:04:23.850 "driver_specific": {} 00:04:23.850 }, 00:04:23.850 { 00:04:23.850 "name": "Passthru0", 00:04:23.850 "aliases": [ 00:04:23.850 "e5de523a-e091-56d8-ab28-66decc75392b" 00:04:23.850 ], 00:04:23.850 "product_name": "passthru", 00:04:23.850 "block_size": 512, 00:04:23.850 "num_blocks": 16384, 00:04:23.850 "uuid": "e5de523a-e091-56d8-ab28-66decc75392b", 00:04:23.850 "assigned_rate_limits": { 00:04:23.850 "rw_ios_per_sec": 0, 00:04:23.850 "rw_mbytes_per_sec": 0, 00:04:23.850 "r_mbytes_per_sec": 0, 00:04:23.850 "w_mbytes_per_sec": 0 00:04:23.850 }, 00:04:23.850 "claimed": false, 00:04:23.850 "zoned": false, 00:04:23.850 "supported_io_types": { 00:04:23.850 "read": true, 00:04:23.850 "write": true, 00:04:23.850 "unmap": true, 00:04:23.850 "flush": true, 00:04:23.850 "reset": true, 00:04:23.850 "nvme_admin": false, 00:04:23.850 "nvme_io": false, 00:04:23.850 "nvme_io_md": false, 00:04:23.850 "write_zeroes": true, 00:04:23.850 "zcopy": true, 00:04:23.850 "get_zone_info": false, 00:04:23.850 "zone_management": false, 00:04:23.850 "zone_append": false, 00:04:23.850 "compare": false, 00:04:23.850 "compare_and_write": false, 00:04:23.850 "abort": true, 00:04:23.850 "seek_hole": false, 00:04:23.850 "seek_data": false, 00:04:23.850 "copy": true, 00:04:23.850 "nvme_iov_md": false 00:04:23.850 }, 00:04:23.850 "memory_domains": [ 00:04:23.850 { 00:04:23.850 "dma_device_id": "system", 00:04:23.850 "dma_device_type": 1 00:04:23.850 }, 00:04:23.850 { 00:04:23.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.850 "dma_device_type": 2 00:04:23.850 } 00:04:23.850 ], 00:04:23.850 "driver_specific": { 00:04:23.850 "passthru": { 00:04:23.850 "name": "Passthru0", 00:04:23.850 "base_bdev_name": "Malloc0" 00:04:23.850 } 00:04:23.850 } 00:04:23.850 } 00:04:23.850 ]' 00:04:23.850 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.850 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.850 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.850 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:23.850 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.108 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.108 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:24.108 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.108 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.108 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.108 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.108 08:17:11 rpc.rpc_integrity -- 
common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.108 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.108 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.108 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.108 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.108 08:17:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.108 00:04:24.108 real 0m0.328s 00:04:24.108 user 0m0.215s 00:04:24.108 sys 0m0.043s 00:04:24.108 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:24.108 08:17:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.109 ************************************ 00:04:24.109 END TEST rpc_integrity 00:04:24.109 ************************************ 00:04:24.109 08:17:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:24.109 08:17:11 rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:24.109 08:17:11 rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:24.109 08:17:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.109 ************************************ 00:04:24.109 START TEST rpc_plugins 00:04:24.109 ************************************ 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@1132 -- # rpc_plugins 00:04:24.109 08:17:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.109 08:17:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:24.109 08:17:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.109 08:17:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:24.109 { 00:04:24.109 "name": "Malloc1", 00:04:24.109 "aliases": [ 00:04:24.109 "462757c3-0c60-44b6-84d7-22425b0065a7" 00:04:24.109 ], 00:04:24.109 "product_name": "Malloc disk", 00:04:24.109 "block_size": 4096, 00:04:24.109 "num_blocks": 256, 00:04:24.109 "uuid": "462757c3-0c60-44b6-84d7-22425b0065a7", 00:04:24.109 "assigned_rate_limits": { 00:04:24.109 "rw_ios_per_sec": 0, 00:04:24.109 "rw_mbytes_per_sec": 0, 00:04:24.109 "r_mbytes_per_sec": 0, 00:04:24.109 "w_mbytes_per_sec": 0 00:04:24.109 }, 00:04:24.109 "claimed": false, 00:04:24.109 "zoned": false, 00:04:24.109 "supported_io_types": { 00:04:24.109 "read": true, 00:04:24.109 "write": true, 00:04:24.109 "unmap": true, 00:04:24.109 "flush": true, 00:04:24.109 "reset": true, 00:04:24.109 "nvme_admin": false, 00:04:24.109 "nvme_io": false, 00:04:24.109 "nvme_io_md": false, 00:04:24.109 "write_zeroes": true, 00:04:24.109 "zcopy": true, 00:04:24.109 "get_zone_info": false, 00:04:24.109 "zone_management": false, 00:04:24.109 "zone_append": false, 00:04:24.109 "compare": false, 00:04:24.109 "compare_and_write": false, 00:04:24.109 "abort": true, 00:04:24.109 "seek_hole": false, 00:04:24.109 "seek_data": false, 00:04:24.109 "copy": true, 00:04:24.109 "nvme_iov_md": false 00:04:24.109 }, 00:04:24.109 "memory_domains": [ 00:04:24.109 { 
00:04:24.109 "dma_device_id": "system", 00:04:24.109 "dma_device_type": 1 00:04:24.109 }, 00:04:24.109 { 00:04:24.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.109 "dma_device_type": 2 00:04:24.109 } 00:04:24.109 ], 00:04:24.109 "driver_specific": {} 00:04:24.109 } 00:04:24.109 ]' 00:04:24.109 08:17:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:24.109 08:17:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:24.109 08:17:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.109 08:17:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.109 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.109 08:17:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:24.109 08:17:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:24.367 08:17:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:24.367 00:04:24.367 real 0m0.169s 00:04:24.367 user 0m0.110s 00:04:24.367 sys 0m0.023s 00:04:24.367 ************************************ 00:04:24.367 END TEST rpc_plugins 00:04:24.367 ************************************ 00:04:24.368 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:24.368 08:17:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.368 08:17:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:24.368 08:17:11 rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:24.368 08:17:11 rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:24.368 08:17:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.368 ************************************ 00:04:24.368 START TEST rpc_trace_cmd_test 00:04:24.368 ************************************ 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1132 -- # rpc_trace_cmd_test 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:24.368 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56425", 00:04:24.368 "tpoint_group_mask": "0x8", 00:04:24.368 "iscsi_conn": { 00:04:24.368 "mask": "0x2", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "scsi": { 00:04:24.368 "mask": "0x4", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "bdev": { 00:04:24.368 "mask": "0x8", 00:04:24.368 "tpoint_mask": "0xffffffffffffffff" 00:04:24.368 }, 00:04:24.368 "nvmf_rdma": { 00:04:24.368 "mask": "0x10", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "nvmf_tcp": { 00:04:24.368 "mask": "0x20", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "ftl": { 00:04:24.368 
"mask": "0x40", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "blobfs": { 00:04:24.368 "mask": "0x80", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "dsa": { 00:04:24.368 "mask": "0x200", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "thread": { 00:04:24.368 "mask": "0x400", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "nvme_pcie": { 00:04:24.368 "mask": "0x800", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "iaa": { 00:04:24.368 "mask": "0x1000", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "nvme_tcp": { 00:04:24.368 "mask": "0x2000", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "bdev_nvme": { 00:04:24.368 "mask": "0x4000", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "sock": { 00:04:24.368 "mask": "0x8000", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "blob": { 00:04:24.368 "mask": "0x10000", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "bdev_raid": { 00:04:24.368 "mask": "0x20000", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 }, 00:04:24.368 "scheduler": { 00:04:24.368 "mask": "0x40000", 00:04:24.368 "tpoint_mask": "0x0" 00:04:24.368 } 00:04:24.368 }' 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:24.368 08:17:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:24.626 08:17:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:24.626 08:17:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:24.626 08:17:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:24.626 00:04:24.626 real 0m0.274s 00:04:24.626 user 0m0.234s 00:04:24.626 sys 0m0.031s 00:04:24.626 ************************************ 00:04:24.626 END TEST rpc_trace_cmd_test 00:04:24.626 ************************************ 00:04:24.626 08:17:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:24.626 08:17:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.626 08:17:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:24.626 08:17:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:24.626 08:17:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:24.626 08:17:12 rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:24.626 08:17:12 rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:24.626 08:17:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.626 ************************************ 00:04:24.626 START TEST rpc_daemon_integrity 00:04:24.626 ************************************ 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1132 -- # rpc_integrity 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.626 
08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:24.626 { 00:04:24.626 "name": "Malloc2", 00:04:24.626 "aliases": [ 00:04:24.626 "f7e3a979-337d-4359-8ec5-148f6baa8f44" 00:04:24.626 ], 00:04:24.626 "product_name": "Malloc disk", 00:04:24.626 "block_size": 512, 00:04:24.626 "num_blocks": 16384, 00:04:24.626 "uuid": "f7e3a979-337d-4359-8ec5-148f6baa8f44", 00:04:24.626 "assigned_rate_limits": { 00:04:24.626 "rw_ios_per_sec": 0, 00:04:24.626 "rw_mbytes_per_sec": 0, 00:04:24.626 "r_mbytes_per_sec": 0, 00:04:24.626 "w_mbytes_per_sec": 0 00:04:24.626 }, 00:04:24.626 "claimed": false, 00:04:24.626 "zoned": false, 00:04:24.626 "supported_io_types": { 00:04:24.626 "read": true, 00:04:24.626 "write": true, 00:04:24.626 "unmap": true, 00:04:24.626 "flush": true, 00:04:24.626 "reset": true, 00:04:24.626 "nvme_admin": false, 00:04:24.626 "nvme_io": false, 00:04:24.626 "nvme_io_md": false, 00:04:24.626 "write_zeroes": true, 00:04:24.626 "zcopy": true, 00:04:24.626 "get_zone_info": false, 00:04:24.626 "zone_management": false, 00:04:24.626 "zone_append": false, 00:04:24.626 "compare": false, 00:04:24.626 "compare_and_write": false, 00:04:24.626 "abort": true, 00:04:24.626 "seek_hole": false, 00:04:24.626 "seek_data": false, 00:04:24.626 "copy": true, 00:04:24.626 "nvme_iov_md": false 00:04:24.626 }, 00:04:24.626 "memory_domains": [ 00:04:24.626 { 00:04:24.626 "dma_device_id": "system", 00:04:24.626 "dma_device_type": 1 00:04:24.626 }, 00:04:24.626 { 00:04:24.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.626 "dma_device_type": 2 00:04:24.626 } 00:04:24.626 ], 00:04:24.626 "driver_specific": {} 00:04:24.626 } 00:04:24.626 ]' 00:04:24.626 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:24.933 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.933 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:24.933 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.933 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.933 [2024-11-20 08:17:12.222190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:24.933 [2024-11-20 08:17:12.222248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:24.933 [2024-11-20 08:17:12.222269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa7a790 00:04:24.933 [2024-11-20 08:17:12.222279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.933 [2024-11-20 08:17:12.224185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.933 [2024-11-20 08:17:12.224221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.933 Passthru0 00:04:24.933 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.933 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.933 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.933 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.933 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.933 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.933 { 00:04:24.933 "name": "Malloc2", 00:04:24.933 "aliases": [ 00:04:24.933 "f7e3a979-337d-4359-8ec5-148f6baa8f44" 00:04:24.933 ], 00:04:24.933 "product_name": "Malloc disk", 00:04:24.933 "block_size": 512, 00:04:24.933 "num_blocks": 16384, 00:04:24.933 "uuid": "f7e3a979-337d-4359-8ec5-148f6baa8f44", 00:04:24.933 "assigned_rate_limits": { 00:04:24.933 "rw_ios_per_sec": 0, 00:04:24.933 "rw_mbytes_per_sec": 0, 00:04:24.933 "r_mbytes_per_sec": 0, 00:04:24.933 "w_mbytes_per_sec": 0 00:04:24.933 }, 00:04:24.933 "claimed": true, 00:04:24.933 "claim_type": "exclusive_write", 00:04:24.933 "zoned": false, 00:04:24.933 "supported_io_types": { 00:04:24.933 "read": true, 00:04:24.933 "write": true, 00:04:24.933 "unmap": true, 00:04:24.933 "flush": true, 00:04:24.934 "reset": true, 00:04:24.934 "nvme_admin": false, 00:04:24.934 "nvme_io": false, 00:04:24.934 "nvme_io_md": false, 00:04:24.934 "write_zeroes": true, 00:04:24.934 "zcopy": true, 00:04:24.934 "get_zone_info": false, 00:04:24.934 "zone_management": false, 00:04:24.934 "zone_append": false, 00:04:24.934 "compare": false, 00:04:24.934 "compare_and_write": false, 00:04:24.934 "abort": true, 00:04:24.934 "seek_hole": false, 00:04:24.934 "seek_data": false, 00:04:24.934 "copy": true, 00:04:24.934 "nvme_iov_md": false 00:04:24.934 }, 00:04:24.934 "memory_domains": [ 00:04:24.934 { 00:04:24.934 "dma_device_id": "system", 00:04:24.934 "dma_device_type": 1 00:04:24.934 }, 00:04:24.934 { 00:04:24.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.934 "dma_device_type": 2 00:04:24.934 } 00:04:24.934 ], 00:04:24.934 "driver_specific": {} 00:04:24.934 }, 00:04:24.934 { 00:04:24.934 "name": "Passthru0", 00:04:24.934 "aliases": [ 00:04:24.934 "7bbf2ecf-ea03-5758-b33a-38bfe2f81517" 00:04:24.934 ], 00:04:24.934 "product_name": "passthru", 00:04:24.934 "block_size": 512, 00:04:24.934 "num_blocks": 16384, 00:04:24.934 "uuid": "7bbf2ecf-ea03-5758-b33a-38bfe2f81517", 00:04:24.934 "assigned_rate_limits": { 00:04:24.934 "rw_ios_per_sec": 0, 00:04:24.934 "rw_mbytes_per_sec": 0, 00:04:24.934 "r_mbytes_per_sec": 0, 00:04:24.934 "w_mbytes_per_sec": 0 00:04:24.934 }, 00:04:24.934 "claimed": false, 00:04:24.934 "zoned": false, 00:04:24.934 "supported_io_types": { 00:04:24.934 "read": true, 00:04:24.934 "write": true, 00:04:24.934 "unmap": true, 00:04:24.934 "flush": true, 00:04:24.934 "reset": true, 00:04:24.934 "nvme_admin": false, 00:04:24.934 "nvme_io": false, 00:04:24.934 "nvme_io_md": 
false, 00:04:24.934 "write_zeroes": true, 00:04:24.934 "zcopy": true, 00:04:24.934 "get_zone_info": false, 00:04:24.934 "zone_management": false, 00:04:24.934 "zone_append": false, 00:04:24.934 "compare": false, 00:04:24.934 "compare_and_write": false, 00:04:24.934 "abort": true, 00:04:24.934 "seek_hole": false, 00:04:24.934 "seek_data": false, 00:04:24.934 "copy": true, 00:04:24.934 "nvme_iov_md": false 00:04:24.934 }, 00:04:24.934 "memory_domains": [ 00:04:24.934 { 00:04:24.934 "dma_device_id": "system", 00:04:24.934 "dma_device_type": 1 00:04:24.934 }, 00:04:24.934 { 00:04:24.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.934 "dma_device_type": 2 00:04:24.934 } 00:04:24.934 ], 00:04:24.934 "driver_specific": { 00:04:24.934 "passthru": { 00:04:24.934 "name": "Passthru0", 00:04:24.934 "base_bdev_name": "Malloc2" 00:04:24.934 } 00:04:24.934 } 00:04:24.934 } 00:04:24.934 ]' 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.934 00:04:24.934 real 0m0.316s 00:04:24.934 user 0m0.228s 00:04:24.934 sys 0m0.026s 00:04:24.934 ************************************ 00:04:24.934 END TEST rpc_daemon_integrity 00:04:24.934 ************************************ 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:24.934 08:17:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.934 08:17:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:24.934 08:17:12 rpc -- rpc/rpc.sh@84 -- # killprocess 56425 00:04:24.934 08:17:12 rpc -- common/autotest_common.sh@957 -- # '[' -z 56425 ']' 00:04:24.934 08:17:12 rpc -- common/autotest_common.sh@961 -- # kill -0 56425 00:04:24.934 08:17:12 rpc -- common/autotest_common.sh@962 -- # uname 00:04:24.934 08:17:12 rpc -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:04:24.934 08:17:12 rpc -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 56425 00:04:24.934 08:17:12 rpc -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:04:24.934 
killing process with pid 56425 00:04:24.934 08:17:12 rpc -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:04:24.934 08:17:12 rpc -- common/autotest_common.sh@975 -- # echo 'killing process with pid 56425' 00:04:24.934 08:17:12 rpc -- common/autotest_common.sh@976 -- # kill 56425 00:04:24.934 08:17:12 rpc -- common/autotest_common.sh@981 -- # wait 56425 00:04:25.499 00:04:25.499 real 0m3.050s 00:04:25.499 user 0m3.930s 00:04:25.499 sys 0m0.725s 00:04:25.499 08:17:12 rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:25.499 ************************************ 00:04:25.499 END TEST rpc 00:04:25.499 ************************************ 00:04:25.499 08:17:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.499 08:17:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:25.499 08:17:12 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:25.499 08:17:12 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:25.499 08:17:12 -- common/autotest_common.sh@10 -- # set +x 00:04:25.499 ************************************ 00:04:25.499 START TEST skip_rpc 00:04:25.499 ************************************ 00:04:25.499 08:17:12 skip_rpc -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:25.499 * Looking for test storage... 00:04:25.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.499 08:17:12 skip_rpc -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:04:25.499 08:17:12 skip_rpc -- common/autotest_common.sh@1638 -- # lcov --version 00:04:25.499 08:17:12 skip_rpc -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:04:25.756 08:17:13 skip_rpc -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.756 08:17:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:25.756 08:17:13 skip_rpc -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.756 08:17:13 skip_rpc -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:04:25.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.757 --rc genhtml_branch_coverage=1 00:04:25.757 --rc genhtml_function_coverage=1 00:04:25.757 --rc genhtml_legend=1 00:04:25.757 --rc geninfo_all_blocks=1 00:04:25.757 --rc geninfo_unexecuted_blocks=1 00:04:25.757 00:04:25.757 ' 00:04:25.757 08:17:13 skip_rpc -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:04:25.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.757 --rc genhtml_branch_coverage=1 00:04:25.757 --rc genhtml_function_coverage=1 00:04:25.757 --rc genhtml_legend=1 00:04:25.757 --rc geninfo_all_blocks=1 00:04:25.757 --rc geninfo_unexecuted_blocks=1 00:04:25.757 00:04:25.757 ' 00:04:25.757 08:17:13 skip_rpc -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:04:25.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.757 --rc genhtml_branch_coverage=1 00:04:25.757 --rc genhtml_function_coverage=1 00:04:25.757 --rc genhtml_legend=1 00:04:25.757 --rc geninfo_all_blocks=1 00:04:25.757 --rc geninfo_unexecuted_blocks=1 00:04:25.757 00:04:25.757 ' 00:04:25.757 08:17:13 skip_rpc -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:04:25.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.757 --rc genhtml_branch_coverage=1 00:04:25.757 --rc genhtml_function_coverage=1 00:04:25.757 --rc genhtml_legend=1 00:04:25.757 --rc geninfo_all_blocks=1 00:04:25.757 --rc geninfo_unexecuted_blocks=1 00:04:25.757 00:04:25.757 ' 00:04:25.757 08:17:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.757 08:17:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:25.757 08:17:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:25.757 08:17:13 skip_rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:25.757 08:17:13 skip_rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:25.757 08:17:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.757 ************************************ 00:04:25.757 START TEST skip_rpc 00:04:25.757 ************************************ 00:04:25.757 08:17:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1132 -- # test_skip_rpc 00:04:25.757 08:17:13 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56637 00:04:25.757 08:17:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.757 08:17:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:25.757 08:17:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:25.757 [2024-11-20 08:17:13.195743] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:04:25.757 [2024-11-20 08:17:13.196047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56637 ] 00:04:26.014 [2024-11-20 08:17:13.342909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.014 [2024-11-20 08:17:13.408091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.014 [2024-11-20 08:17:13.483398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # local es=0 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@658 -- # rpc_cmd spdk_get_version 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@658 -- # es=1 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56637 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' -z 56637 ']' 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@961 -- # kill -0 56637 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # uname 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 56637 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@975 -- # echo 'killing process 
with pid 56637' 00:04:31.275 killing process with pid 56637 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # kill 56637 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@981 -- # wait 56637 00:04:31.275 00:04:31.275 real 0m5.424s 00:04:31.275 user 0m5.024s 00:04:31.275 sys 0m0.308s 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:31.275 ************************************ 00:04:31.275 END TEST skip_rpc 00:04:31.275 ************************************ 00:04:31.275 08:17:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.275 08:17:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:31.275 08:17:18 skip_rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:31.275 08:17:18 skip_rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:31.275 08:17:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.275 ************************************ 00:04:31.275 START TEST skip_rpc_with_json 00:04:31.275 ************************************ 00:04:31.275 08:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1132 -- # test_skip_rpc_with_json 00:04:31.275 08:17:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:31.275 08:17:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56723 00:04:31.275 08:17:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.275 08:17:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.275 08:17:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56723 00:04:31.275 08:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # '[' -z 56723 ']' 00:04:31.275 08:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.275 08:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@843 -- # local max_retries=100 00:04:31.275 08:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.275 08:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@847 -- # xtrace_disable 00:04:31.275 08:17:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.275 [2024-11-20 08:17:18.660161] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:04:31.275 [2024-11-20 08:17:18.660271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56723 ] 00:04:31.275 [2024-11-20 08:17:18.808888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.534 [2024-11-20 08:17:18.871747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.534 [2024-11-20 08:17:18.944903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@871 -- # return 0 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.794 [2024-11-20 08:17:19.146668] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:31.794 request: 00:04:31.794 { 00:04:31.794 "trtype": "tcp", 00:04:31.794 "method": "nvmf_get_transports", 00:04:31.794 "req_id": 1 00:04:31.794 } 00:04:31.794 Got JSON-RPC error response 00:04:31.794 response: 00:04:31.794 { 00:04:31.794 "code": -19, 00:04:31.794 "message": "No such device" 00:04:31.794 } 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.794 [2024-11-20 08:17:19.158754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:31.794 08:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:31.794 { 00:04:31.794 "subsystems": [ 00:04:31.794 { 00:04:31.794 "subsystem": "fsdev", 00:04:31.794 "config": [ 00:04:31.794 { 00:04:31.794 "method": "fsdev_set_opts", 00:04:31.794 "params": { 00:04:31.794 "fsdev_io_pool_size": 65535, 00:04:31.794 "fsdev_io_cache_size": 256 00:04:31.794 } 00:04:31.794 } 00:04:31.794 ] 00:04:31.794 }, 00:04:31.794 { 00:04:31.794 "subsystem": "keyring", 00:04:31.794 "config": [] 00:04:31.794 }, 00:04:31.794 { 00:04:31.794 "subsystem": "iobuf", 00:04:31.794 "config": [ 00:04:31.794 { 00:04:31.794 "method": "iobuf_set_options", 00:04:31.794 "params": { 00:04:31.794 "small_pool_count": 8192, 00:04:31.794 "large_pool_count": 1024, 00:04:31.794 "small_bufsize": 8192, 00:04:31.794 "large_bufsize": 135168, 00:04:31.794 "enable_numa": false 00:04:31.794 } 
00:04:31.794 } 00:04:31.794 ] 00:04:31.794 }, 00:04:31.794 { 00:04:31.794 "subsystem": "sock", 00:04:31.794 "config": [ 00:04:31.794 { 00:04:31.794 "method": "sock_set_default_impl", 00:04:31.794 "params": { 00:04:31.794 "impl_name": "uring" 00:04:31.794 } 00:04:31.794 }, 00:04:31.794 { 00:04:31.794 "method": "sock_impl_set_options", 00:04:31.794 "params": { 00:04:31.794 "impl_name": "ssl", 00:04:31.794 "recv_buf_size": 4096, 00:04:31.794 "send_buf_size": 4096, 00:04:31.794 "enable_recv_pipe": true, 00:04:31.794 "enable_quickack": false, 00:04:31.794 "enable_placement_id": 0, 00:04:31.794 "enable_zerocopy_send_server": true, 00:04:31.794 "enable_zerocopy_send_client": false, 00:04:31.794 "zerocopy_threshold": 0, 00:04:31.794 "tls_version": 0, 00:04:31.794 "enable_ktls": false 00:04:31.794 } 00:04:31.794 }, 00:04:31.794 { 00:04:31.794 "method": "sock_impl_set_options", 00:04:31.794 "params": { 00:04:31.794 "impl_name": "posix", 00:04:31.794 "recv_buf_size": 2097152, 00:04:31.794 "send_buf_size": 2097152, 00:04:31.794 "enable_recv_pipe": true, 00:04:31.794 "enable_quickack": false, 00:04:31.794 "enable_placement_id": 0, 00:04:31.794 "enable_zerocopy_send_server": true, 00:04:31.794 "enable_zerocopy_send_client": false, 00:04:31.794 "zerocopy_threshold": 0, 00:04:31.794 "tls_version": 0, 00:04:31.794 "enable_ktls": false 00:04:31.794 } 00:04:31.794 }, 00:04:31.794 { 00:04:31.794 "method": "sock_impl_set_options", 00:04:31.794 "params": { 00:04:31.794 "impl_name": "uring", 00:04:31.794 "recv_buf_size": 2097152, 00:04:31.794 "send_buf_size": 2097152, 00:04:31.794 "enable_recv_pipe": true, 00:04:31.794 "enable_quickack": false, 00:04:31.794 "enable_placement_id": 0, 00:04:31.794 "enable_zerocopy_send_server": false, 00:04:31.794 "enable_zerocopy_send_client": false, 00:04:31.794 "zerocopy_threshold": 0, 00:04:31.794 "tls_version": 0, 00:04:31.794 "enable_ktls": false 00:04:31.794 } 00:04:31.794 } 00:04:31.794 ] 00:04:31.794 }, 00:04:31.794 { 00:04:31.794 "subsystem": "vmd", 00:04:31.794 "config": [] 00:04:31.794 }, 00:04:31.794 { 00:04:31.794 "subsystem": "accel", 00:04:31.794 "config": [ 00:04:31.794 { 00:04:31.794 "method": "accel_set_options", 00:04:31.794 "params": { 00:04:31.794 "small_cache_size": 128, 00:04:31.794 "large_cache_size": 16, 00:04:31.794 "task_count": 2048, 00:04:31.794 "sequence_count": 2048, 00:04:31.794 "buf_count": 2048 00:04:31.794 } 00:04:31.794 } 00:04:31.794 ] 00:04:31.794 }, 00:04:31.794 { 00:04:31.794 "subsystem": "bdev", 00:04:31.794 "config": [ 00:04:31.794 { 00:04:31.794 "method": "bdev_set_options", 00:04:31.794 "params": { 00:04:31.794 "bdev_io_pool_size": 65535, 00:04:31.794 "bdev_io_cache_size": 256, 00:04:31.794 "bdev_auto_examine": true, 00:04:31.794 "iobuf_small_cache_size": 128, 00:04:31.794 "iobuf_large_cache_size": 16 00:04:31.794 } 00:04:31.794 }, 00:04:31.794 { 00:04:31.794 "method": "bdev_raid_set_options", 00:04:31.794 "params": { 00:04:31.794 "process_window_size_kb": 1024, 00:04:31.794 "process_max_bandwidth_mb_sec": 0 00:04:31.794 } 00:04:31.794 }, 00:04:31.794 { 00:04:31.794 "method": "bdev_iscsi_set_options", 00:04:31.794 "params": { 00:04:31.794 "timeout_sec": 30 00:04:31.794 } 00:04:31.794 }, 00:04:31.794 { 00:04:31.794 "method": "bdev_nvme_set_options", 00:04:31.794 "params": { 00:04:31.794 "action_on_timeout": "none", 00:04:31.794 "timeout_us": 0, 00:04:31.794 "timeout_admin_us": 0, 00:04:31.794 "keep_alive_timeout_ms": 10000, 00:04:31.794 "arbitration_burst": 0, 00:04:31.794 "low_priority_weight": 0, 00:04:31.794 "medium_priority_weight": 
0, 00:04:31.794 "high_priority_weight": 0, 00:04:31.794 "nvme_adminq_poll_period_us": 10000, 00:04:31.794 "nvme_ioq_poll_period_us": 0, 00:04:31.794 "io_queue_requests": 0, 00:04:31.794 "delay_cmd_submit": true, 00:04:31.794 "transport_retry_count": 4, 00:04:31.794 "bdev_retry_count": 3, 00:04:31.794 "transport_ack_timeout": 0, 00:04:31.794 "ctrlr_loss_timeout_sec": 0, 00:04:31.794 "reconnect_delay_sec": 0, 00:04:31.794 "fast_io_fail_timeout_sec": 0, 00:04:31.794 "disable_auto_failback": false, 00:04:31.794 "generate_uuids": false, 00:04:31.794 "transport_tos": 0, 00:04:31.794 "nvme_error_stat": false, 00:04:31.794 "rdma_srq_size": 0, 00:04:31.794 "io_path_stat": false, 00:04:31.794 "allow_accel_sequence": false, 00:04:31.794 "rdma_max_cq_size": 0, 00:04:31.794 "rdma_cm_event_timeout_ms": 0, 00:04:31.794 "dhchap_digests": [ 00:04:31.794 "sha256", 00:04:31.795 "sha384", 00:04:31.795 "sha512" 00:04:31.795 ], 00:04:31.795 "dhchap_dhgroups": [ 00:04:31.795 "null", 00:04:31.795 "ffdhe2048", 00:04:31.795 "ffdhe3072", 00:04:31.795 "ffdhe4096", 00:04:31.795 "ffdhe6144", 00:04:31.795 "ffdhe8192" 00:04:31.795 ] 00:04:31.795 } 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "method": "bdev_nvme_set_hotplug", 00:04:31.795 "params": { 00:04:31.795 "period_us": 100000, 00:04:31.795 "enable": false 00:04:31.795 } 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "method": "bdev_wait_for_examine" 00:04:31.795 } 00:04:31.795 ] 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "subsystem": "scsi", 00:04:31.795 "config": null 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "subsystem": "scheduler", 00:04:31.795 "config": [ 00:04:31.795 { 00:04:31.795 "method": "framework_set_scheduler", 00:04:31.795 "params": { 00:04:31.795 "name": "static" 00:04:31.795 } 00:04:31.795 } 00:04:31.795 ] 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "subsystem": "vhost_scsi", 00:04:31.795 "config": [] 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "subsystem": "vhost_blk", 00:04:31.795 "config": [] 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "subsystem": "ublk", 00:04:31.795 "config": [] 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "subsystem": "nbd", 00:04:31.795 "config": [] 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "subsystem": "nvmf", 00:04:31.795 "config": [ 00:04:31.795 { 00:04:31.795 "method": "nvmf_set_config", 00:04:31.795 "params": { 00:04:31.795 "discovery_filter": "match_any", 00:04:31.795 "admin_cmd_passthru": { 00:04:31.795 "identify_ctrlr": false 00:04:31.795 }, 00:04:31.795 "dhchap_digests": [ 00:04:31.795 "sha256", 00:04:31.795 "sha384", 00:04:31.795 "sha512" 00:04:31.795 ], 00:04:31.795 "dhchap_dhgroups": [ 00:04:31.795 "null", 00:04:31.795 "ffdhe2048", 00:04:31.795 "ffdhe3072", 00:04:31.795 "ffdhe4096", 00:04:31.795 "ffdhe6144", 00:04:31.795 "ffdhe8192" 00:04:31.795 ] 00:04:31.795 } 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "method": "nvmf_set_max_subsystems", 00:04:31.795 "params": { 00:04:31.795 "max_subsystems": 1024 00:04:31.795 } 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "method": "nvmf_set_crdt", 00:04:31.795 "params": { 00:04:31.795 "crdt1": 0, 00:04:31.795 "crdt2": 0, 00:04:31.795 "crdt3": 0 00:04:31.795 } 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "method": "nvmf_create_transport", 00:04:31.795 "params": { 00:04:31.795 "trtype": "TCP", 00:04:31.795 "max_queue_depth": 128, 00:04:31.795 "max_io_qpairs_per_ctrlr": 127, 00:04:31.795 "in_capsule_data_size": 4096, 00:04:31.795 "max_io_size": 131072, 00:04:31.795 "io_unit_size": 131072, 00:04:31.795 "max_aq_depth": 128, 00:04:31.795 "num_shared_buffers": 511, 00:04:31.795 
"buf_cache_size": 4294967295, 00:04:31.795 "dif_insert_or_strip": false, 00:04:31.795 "zcopy": false, 00:04:31.795 "c2h_success": true, 00:04:31.795 "sock_priority": 0, 00:04:31.795 "abort_timeout_sec": 1, 00:04:31.795 "ack_timeout": 0, 00:04:31.795 "data_wr_pool_size": 0 00:04:31.795 } 00:04:31.795 } 00:04:31.795 ] 00:04:31.795 }, 00:04:31.795 { 00:04:31.795 "subsystem": "iscsi", 00:04:31.795 "config": [ 00:04:31.795 { 00:04:31.795 "method": "iscsi_set_options", 00:04:31.795 "params": { 00:04:31.795 "node_base": "iqn.2016-06.io.spdk", 00:04:31.795 "max_sessions": 128, 00:04:31.795 "max_connections_per_session": 2, 00:04:31.795 "max_queue_depth": 64, 00:04:31.795 "default_time2wait": 2, 00:04:31.795 "default_time2retain": 20, 00:04:31.795 "first_burst_length": 8192, 00:04:31.795 "immediate_data": true, 00:04:31.795 "allow_duplicated_isid": false, 00:04:31.795 "error_recovery_level": 0, 00:04:31.795 "nop_timeout": 60, 00:04:31.795 "nop_in_interval": 30, 00:04:31.795 "disable_chap": false, 00:04:31.795 "require_chap": false, 00:04:31.795 "mutual_chap": false, 00:04:31.795 "chap_group": 0, 00:04:31.795 "max_large_datain_per_connection": 64, 00:04:31.795 "max_r2t_per_connection": 4, 00:04:31.795 "pdu_pool_size": 36864, 00:04:31.795 "immediate_data_pool_size": 16384, 00:04:31.795 "data_out_pool_size": 2048 00:04:31.795 } 00:04:31.795 } 00:04:31.795 ] 00:04:31.795 } 00:04:31.795 ] 00:04:31.795 } 00:04:31.795 08:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:31.795 08:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56723 00:04:31.795 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' -z 56723 ']' 00:04:31.795 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill -0 56723 00:04:31.795 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # uname 00:04:31.795 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:04:31.795 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 56723 00:04:32.052 killing process with pid 56723 00:04:32.053 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:04:32.053 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:04:32.053 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@975 -- # echo 'killing process with pid 56723' 00:04:32.053 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # kill 56723 00:04:32.053 08:17:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@981 -- # wait 56723 00:04:32.309 08:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56743 00:04:32.309 08:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.309 08:17:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:37.653 08:17:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56743 00:04:37.653 08:17:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' -z 56743 ']' 00:04:37.653 08:17:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill -0 56743 00:04:37.653 08:17:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # uname 00:04:37.653 08:17:24 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:04:37.653 08:17:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 56743 00:04:37.653 08:17:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:04:37.653 killing process with pid 56743 00:04:37.653 08:17:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:04:37.653 08:17:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@975 -- # echo 'killing process with pid 56743' 00:04:37.653 08:17:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # kill 56743 00:04:37.653 08:17:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@981 -- # wait 56743 00:04:37.653 08:17:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:37.653 08:17:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:37.653 00:04:37.653 real 0m6.594s 00:04:37.653 user 0m6.137s 00:04:37.653 sys 0m0.644s 00:04:37.653 08:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:37.653 08:17:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.653 ************************************ 00:04:37.653 END TEST skip_rpc_with_json 00:04:37.653 ************************************ 00:04:37.911 08:17:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:37.911 08:17:25 skip_rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:37.911 08:17:25 skip_rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:37.911 08:17:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.911 ************************************ 00:04:37.911 START TEST skip_rpc_with_delay 00:04:37.911 ************************************ 00:04:37.911 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1132 -- # test_skip_rpc_with_delay 00:04:37.911 08:17:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.911 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # local es=0 00:04:37.911 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.911 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.911 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:04:37.911 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.911 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:04:37.911 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.911 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:04:37.911 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.911 08:17:25 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:37.911 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.911 [2024-11-20 08:17:25.296823] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:37.912 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@658 -- # es=1 00:04:37.912 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:04:37.912 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:04:37.912 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:04:37.912 00:04:37.912 real 0m0.078s 00:04:37.912 user 0m0.053s 00:04:37.912 sys 0m0.024s 00:04:37.912 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:37.912 08:17:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:37.912 ************************************ 00:04:37.912 END TEST skip_rpc_with_delay 00:04:37.912 ************************************ 00:04:37.912 08:17:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:37.912 08:17:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:37.912 08:17:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:37.912 08:17:25 skip_rpc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:37.912 08:17:25 skip_rpc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:37.912 08:17:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.912 ************************************ 00:04:37.912 START TEST exit_on_failed_rpc_init 00:04:37.912 ************************************ 00:04:37.912 08:17:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1132 -- # test_exit_on_failed_rpc_init 00:04:37.912 08:17:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56853 00:04:37.912 08:17:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.912 08:17:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56853 00:04:37.912 08:17:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # '[' -z 56853 ']' 00:04:37.912 08:17:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.912 08:17:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@843 -- # local max_retries=100 00:04:37.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.912 08:17:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.912 08:17:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@847 -- # xtrace_disable 00:04:37.912 08:17:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.912 [2024-11-20 08:17:25.451531] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:04:37.912 [2024-11-20 08:17:25.451687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56853 ] 00:04:38.169 [2024-11-20 08:17:25.604374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.169 [2024-11-20 08:17:25.668685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.428 [2024-11-20 08:17:25.743069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@871 -- # return 0 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # local es=0 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:38.995 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.995 [2024-11-20 08:17:26.537418] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:04:38.995 [2024-11-20 08:17:26.537530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56871 ] 00:04:39.253 [2024-11-20 08:17:26.691037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.253 [2024-11-20 08:17:26.761337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.253 [2024-11-20 08:17:26.761441] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:39.253 [2024-11-20 08:17:26.761462] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:39.253 [2024-11-20 08:17:26.761474] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@658 -- # es=234 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@667 -- # es=106 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # case "$es" in 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # es=1 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56853 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' -z 56853 ']' 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@961 -- # kill -0 56853 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # uname 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 56853 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:04:39.511 killing process with pid 56853 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@975 -- # echo 'killing process with pid 56853' 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # kill 56853 00:04:39.511 08:17:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@981 -- # wait 56853 00:04:40.077 00:04:40.077 real 0m2.020s 00:04:40.077 user 0m2.345s 00:04:40.077 sys 0m0.437s 00:04:40.077 08:17:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:40.077 ************************************ 00:04:40.077 08:17:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.077 END TEST exit_on_failed_rpc_init 00:04:40.077 ************************************ 00:04:40.077 08:17:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.077 00:04:40.077 real 0m14.550s 00:04:40.077 user 0m13.759s 00:04:40.077 sys 0m1.637s 00:04:40.077 08:17:27 skip_rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:40.077 08:17:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.077 ************************************ 00:04:40.077 END TEST skip_rpc 00:04:40.077 ************************************ 00:04:40.077 08:17:27 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.077 08:17:27 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:40.077 08:17:27 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:40.077 08:17:27 -- common/autotest_common.sh@10 -- # set +x 00:04:40.077 
************************************ 00:04:40.077 START TEST rpc_client 00:04:40.077 ************************************ 00:04:40.077 08:17:27 rpc_client -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.077 * Looking for test storage... 00:04:40.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:40.077 08:17:27 rpc_client -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:04:40.077 08:17:27 rpc_client -- common/autotest_common.sh@1638 -- # lcov --version 00:04:40.077 08:17:27 rpc_client -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:04:40.336 08:17:27 rpc_client -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.336 08:17:27 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:40.336 08:17:27 rpc_client -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.336 08:17:27 rpc_client -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:04:40.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.336 --rc genhtml_branch_coverage=1 00:04:40.336 --rc genhtml_function_coverage=1 00:04:40.336 --rc genhtml_legend=1 00:04:40.336 --rc geninfo_all_blocks=1 00:04:40.336 --rc geninfo_unexecuted_blocks=1 00:04:40.336 00:04:40.336 ' 00:04:40.336 08:17:27 rpc_client -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:04:40.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.336 --rc genhtml_branch_coverage=1 00:04:40.336 --rc genhtml_function_coverage=1 00:04:40.336 --rc genhtml_legend=1 00:04:40.336 --rc geninfo_all_blocks=1 00:04:40.336 --rc geninfo_unexecuted_blocks=1 00:04:40.336 00:04:40.336 ' 00:04:40.336 08:17:27 rpc_client -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:04:40.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.336 --rc genhtml_branch_coverage=1 00:04:40.336 --rc genhtml_function_coverage=1 00:04:40.336 --rc genhtml_legend=1 00:04:40.336 --rc geninfo_all_blocks=1 00:04:40.336 --rc geninfo_unexecuted_blocks=1 00:04:40.336 00:04:40.336 ' 00:04:40.336 08:17:27 rpc_client -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:04:40.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.336 --rc genhtml_branch_coverage=1 00:04:40.336 --rc genhtml_function_coverage=1 00:04:40.336 --rc genhtml_legend=1 00:04:40.336 --rc geninfo_all_blocks=1 00:04:40.336 --rc geninfo_unexecuted_blocks=1 00:04:40.336 00:04:40.336 ' 00:04:40.336 08:17:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:40.336 OK 00:04:40.336 08:17:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:40.336 00:04:40.336 real 0m0.236s 00:04:40.336 user 0m0.135s 00:04:40.336 sys 0m0.113s 00:04:40.336 08:17:27 rpc_client -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:40.336 08:17:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:40.336 ************************************ 00:04:40.336 END TEST rpc_client 00:04:40.336 ************************************ 00:04:40.336 08:17:27 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.336 08:17:27 -- 
common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:40.336 08:17:27 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:40.336 08:17:27 -- common/autotest_common.sh@10 -- # set +x 00:04:40.336 ************************************ 00:04:40.336 START TEST json_config 00:04:40.336 ************************************ 00:04:40.336 08:17:27 json_config -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.336 08:17:27 json_config -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:04:40.336 08:17:27 json_config -- common/autotest_common.sh@1638 -- # lcov --version 00:04:40.336 08:17:27 json_config -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:04:40.595 08:17:27 json_config -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:04:40.595 08:17:27 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.595 08:17:27 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.595 08:17:27 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.595 08:17:27 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.595 08:17:27 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.595 08:17:27 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.595 08:17:27 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.595 08:17:27 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.595 08:17:27 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.595 08:17:27 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.595 08:17:27 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.595 08:17:27 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:40.595 08:17:27 json_config -- scripts/common.sh@345 -- # : 1 00:04:40.595 08:17:27 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.595 08:17:27 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.595 08:17:27 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:40.595 08:17:27 json_config -- scripts/common.sh@353 -- # local d=1 00:04:40.595 08:17:27 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.596 08:17:27 json_config -- scripts/common.sh@355 -- # echo 1 00:04:40.596 08:17:27 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.596 08:17:27 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:40.596 08:17:27 json_config -- scripts/common.sh@353 -- # local d=2 00:04:40.596 08:17:27 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.596 08:17:27 json_config -- scripts/common.sh@355 -- # echo 2 00:04:40.596 08:17:27 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.596 08:17:27 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.596 08:17:27 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.596 08:17:27 json_config -- scripts/common.sh@368 -- # return 0 00:04:40.596 08:17:27 json_config -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.596 08:17:27 json_config -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:04:40.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.596 --rc genhtml_branch_coverage=1 00:04:40.596 --rc genhtml_function_coverage=1 00:04:40.596 --rc genhtml_legend=1 00:04:40.596 --rc geninfo_all_blocks=1 00:04:40.596 --rc geninfo_unexecuted_blocks=1 00:04:40.596 00:04:40.596 ' 00:04:40.596 08:17:27 json_config -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:04:40.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.596 --rc genhtml_branch_coverage=1 00:04:40.596 --rc genhtml_function_coverage=1 00:04:40.596 --rc genhtml_legend=1 00:04:40.596 --rc geninfo_all_blocks=1 00:04:40.596 --rc geninfo_unexecuted_blocks=1 00:04:40.596 00:04:40.596 ' 00:04:40.596 08:17:27 json_config -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:04:40.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.596 --rc genhtml_branch_coverage=1 00:04:40.596 --rc genhtml_function_coverage=1 00:04:40.596 --rc genhtml_legend=1 00:04:40.596 --rc geninfo_all_blocks=1 00:04:40.596 --rc geninfo_unexecuted_blocks=1 00:04:40.596 00:04:40.596 ' 00:04:40.596 08:17:27 json_config -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:04:40.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.596 --rc genhtml_branch_coverage=1 00:04:40.596 --rc genhtml_function_coverage=1 00:04:40.596 --rc genhtml_legend=1 00:04:40.596 --rc geninfo_all_blocks=1 00:04:40.596 --rc geninfo_unexecuted_blocks=1 00:04:40.596 00:04:40.596 ' 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.596 08:17:27 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.596 08:17:27 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:40.596 08:17:27 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.596 08:17:27 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.596 08:17:27 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.596 08:17:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.596 08:17:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.596 08:17:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.596 08:17:27 json_config -- paths/export.sh@5 -- # export PATH 00:04:40.596 08:17:27 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@51 -- # : 0 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:40.596 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:40.596 08:17:27 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:40.596 INFO: JSON configuration test init 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:40.596 08:17:27 
json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:40.596 08:17:27 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:40.596 08:17:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:40.596 08:17:27 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:40.596 08:17:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.596 08:17:27 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:40.596 08:17:27 json_config -- json_config/common.sh@9 -- # local app=target 00:04:40.596 08:17:27 json_config -- json_config/common.sh@10 -- # shift 00:04:40.596 08:17:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:40.596 08:17:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:40.596 08:17:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:40.596 08:17:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.596 08:17:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.596 08:17:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57022 00:04:40.596 Waiting for target to run... 00:04:40.596 08:17:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:40.596 08:17:27 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:40.596 08:17:27 json_config -- json_config/common.sh@25 -- # waitforlisten 57022 /var/tmp/spdk_tgt.sock 00:04:40.596 08:17:27 json_config -- common/autotest_common.sh@838 -- # '[' -z 57022 ']' 00:04:40.596 08:17:27 json_config -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:40.597 08:17:27 json_config -- common/autotest_common.sh@843 -- # local max_retries=100 00:04:40.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:40.597 08:17:27 json_config -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:40.597 08:17:27 json_config -- common/autotest_common.sh@847 -- # xtrace_disable 00:04:40.597 08:17:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.597 [2024-11-20 08:17:28.033203] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:04:40.597 [2024-11-20 08:17:28.033325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57022 ] 00:04:41.163 [2024-11-20 08:17:28.454948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.163 [2024-11-20 08:17:28.523581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.730 08:17:29 json_config -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:04:41.730 08:17:29 json_config -- common/autotest_common.sh@871 -- # return 0 00:04:41.730 00:04:41.730 08:17:29 json_config -- json_config/common.sh@26 -- # echo '' 00:04:41.730 08:17:29 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:41.730 08:17:29 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:41.730 08:17:29 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:41.730 08:17:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.730 08:17:29 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:41.730 08:17:29 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:41.730 08:17:29 json_config -- common/autotest_common.sh@735 -- # xtrace_disable 00:04:41.730 08:17:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.730 08:17:29 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:41.730 08:17:29 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:41.730 08:17:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:41.988 [2024-11-20 08:17:29.466673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:42.246 08:17:29 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:42.246 08:17:29 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:42.246 08:17:29 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:42.246 08:17:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.246 08:17:29 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:42.246 08:17:29 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:42.246 08:17:29 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:42.246 08:17:29 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:42.246 08:17:29 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:42.246 08:17:29 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:42.246 08:17:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:42.246 08:17:29 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:42.504 08:17:30 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:42.504 08:17:30 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:42.504 08:17:30 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:04:42.504 08:17:30 json_config -- json_config/json_config.sh@54 -- # sort 00:04:42.504 08:17:30 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:42.504 08:17:30 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:42.504 08:17:30 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:42.504 08:17:30 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:42.504 08:17:30 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:42.504 08:17:30 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:42.504 08:17:30 json_config -- common/autotest_common.sh@735 -- # xtrace_disable 00:04:42.504 08:17:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.763 08:17:30 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:42.763 08:17:30 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:42.763 08:17:30 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:42.763 08:17:30 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:42.763 08:17:30 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:42.763 08:17:30 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:42.763 08:17:30 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:42.763 08:17:30 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:42.763 08:17:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.763 08:17:30 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:42.763 08:17:30 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:42.763 08:17:30 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:42.763 08:17:30 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:42.763 08:17:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:43.020 MallocForNvmf0 00:04:43.020 08:17:30 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:43.021 08:17:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:43.279 MallocForNvmf1 00:04:43.279 08:17:30 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:43.279 08:17:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:43.537 [2024-11-20 08:17:30.916313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.537 08:17:30 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:43.537 08:17:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:43.795 08:17:31 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:43.795 08:17:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:44.057 08:17:31 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:44.057 08:17:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:44.324 08:17:31 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:44.324 08:17:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:44.582 [2024-11-20 08:17:32.016964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:44.582 08:17:32 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:44.582 08:17:32 json_config -- common/autotest_common.sh@735 -- # xtrace_disable 00:04:44.582 08:17:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.582 08:17:32 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:44.582 08:17:32 json_config -- common/autotest_common.sh@735 -- # xtrace_disable 00:04:44.582 08:17:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.582 08:17:32 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:44.582 08:17:32 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.582 08:17:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.840 MallocBdevForConfigChangeCheck 00:04:44.840 08:17:32 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:44.840 08:17:32 json_config -- common/autotest_common.sh@735 -- # xtrace_disable 00:04:44.840 08:17:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.098 08:17:32 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:45.098 08:17:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.357 INFO: shutting down applications... 00:04:45.357 08:17:32 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
00:04:45.357 08:17:32 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:45.357 08:17:32 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:45.357 08:17:32 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:45.357 08:17:32 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:45.615 Calling clear_iscsi_subsystem 00:04:45.615 Calling clear_nvmf_subsystem 00:04:45.615 Calling clear_nbd_subsystem 00:04:45.615 Calling clear_ublk_subsystem 00:04:45.615 Calling clear_vhost_blk_subsystem 00:04:45.615 Calling clear_vhost_scsi_subsystem 00:04:45.615 Calling clear_bdev_subsystem 00:04:45.615 08:17:33 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:45.615 08:17:33 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:45.615 08:17:33 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:45.615 08:17:33 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.615 08:17:33 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:45.615 08:17:33 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:46.182 08:17:33 json_config -- json_config/json_config.sh@352 -- # break 00:04:46.182 08:17:33 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:46.182 08:17:33 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:46.182 08:17:33 json_config -- json_config/common.sh@31 -- # local app=target 00:04:46.182 08:17:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:46.182 08:17:33 json_config -- json_config/common.sh@35 -- # [[ -n 57022 ]] 00:04:46.182 08:17:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57022 00:04:46.182 08:17:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:46.182 08:17:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.182 08:17:33 json_config -- json_config/common.sh@41 -- # kill -0 57022 00:04:46.182 08:17:33 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.749 08:17:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.749 08:17:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.749 08:17:34 json_config -- json_config/common.sh@41 -- # kill -0 57022 00:04:46.749 08:17:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:46.749 08:17:34 json_config -- json_config/common.sh@43 -- # break 00:04:46.749 08:17:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:46.749 SPDK target shutdown done 00:04:46.750 INFO: relaunching applications... 00:04:46.750 08:17:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:46.750 08:17:34 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
00:04:46.750 08:17:34 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:46.750 08:17:34 json_config -- json_config/common.sh@9 -- # local app=target 00:04:46.750 08:17:34 json_config -- json_config/common.sh@10 -- # shift 00:04:46.750 08:17:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.750 08:17:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.750 08:17:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.750 08:17:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.750 08:17:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.750 Waiting for target to run... 00:04:46.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.750 08:17:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57223 00:04:46.750 08:17:34 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:46.750 08:17:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:46.750 08:17:34 json_config -- json_config/common.sh@25 -- # waitforlisten 57223 /var/tmp/spdk_tgt.sock 00:04:46.750 08:17:34 json_config -- common/autotest_common.sh@838 -- # '[' -z 57223 ']' 00:04:46.750 08:17:34 json_config -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.750 08:17:34 json_config -- common/autotest_common.sh@843 -- # local max_retries=100 00:04:46.750 08:17:34 json_config -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.750 08:17:34 json_config -- common/autotest_common.sh@847 -- # xtrace_disable 00:04:46.750 08:17:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.750 [2024-11-20 08:17:34.195425] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:04:46.750 [2024-11-20 08:17:34.195794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57223 ] 00:04:47.316 [2024-11-20 08:17:34.618114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.316 [2024-11-20 08:17:34.675409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.316 [2024-11-20 08:17:34.814384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:47.575 [2024-11-20 08:17:35.031287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.575 [2024-11-20 08:17:35.063315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.833 00:04:47.833 INFO: Checking if target configuration is the same... 
00:04:47.833 08:17:35 json_config -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:04:47.833 08:17:35 json_config -- common/autotest_common.sh@871 -- # return 0 00:04:47.833 08:17:35 json_config -- json_config/common.sh@26 -- # echo '' 00:04:47.833 08:17:35 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:47.833 08:17:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:47.833 08:17:35 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:47.833 08:17:35 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:47.833 08:17:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.833 + '[' 2 -ne 2 ']' 00:04:47.833 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:47.833 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:47.833 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:47.833 +++ basename /dev/fd/62 00:04:47.833 ++ mktemp /tmp/62.XXX 00:04:47.833 + tmp_file_1=/tmp/62.Zf9 00:04:47.833 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:47.833 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:47.833 + tmp_file_2=/tmp/spdk_tgt_config.json.L7Z 00:04:47.833 + ret=0 00:04:47.833 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:48.091 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:48.091 + diff -u /tmp/62.Zf9 /tmp/spdk_tgt_config.json.L7Z 00:04:48.091 INFO: JSON config files are the same 00:04:48.091 + echo 'INFO: JSON config files are the same' 00:04:48.091 + rm /tmp/62.Zf9 /tmp/spdk_tgt_config.json.L7Z 00:04:48.091 + exit 0 00:04:48.091 INFO: changing configuration and checking if this can be detected... 00:04:48.091 08:17:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:48.091 08:17:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:48.091 08:17:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.091 08:17:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.656 08:17:35 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:48.656 08:17:35 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.656 08:17:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.656 + '[' 2 -ne 2 ']' 00:04:48.656 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:48.656 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:48.656 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:48.656 +++ basename /dev/fd/62 00:04:48.656 ++ mktemp /tmp/62.XXX 00:04:48.656 + tmp_file_1=/tmp/62.eSG 00:04:48.656 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.656 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.656 + tmp_file_2=/tmp/spdk_tgt_config.json.iMU 00:04:48.656 + ret=0 00:04:48.656 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:48.914 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:48.914 + diff -u /tmp/62.eSG /tmp/spdk_tgt_config.json.iMU 00:04:48.914 + ret=1 00:04:48.914 + echo '=== Start of file: /tmp/62.eSG ===' 00:04:48.914 + cat /tmp/62.eSG 00:04:48.914 + echo '=== End of file: /tmp/62.eSG ===' 00:04:48.914 + echo '' 00:04:48.914 + echo '=== Start of file: /tmp/spdk_tgt_config.json.iMU ===' 00:04:48.914 + cat /tmp/spdk_tgt_config.json.iMU 00:04:48.914 + echo '=== End of file: /tmp/spdk_tgt_config.json.iMU ===' 00:04:48.914 + echo '' 00:04:48.914 + rm /tmp/62.eSG /tmp/spdk_tgt_config.json.iMU 00:04:48.914 + exit 1 00:04:48.914 INFO: configuration change detected. 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:48.914 08:17:36 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:48.914 08:17:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@324 -- # [[ -n 57223 ]] 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:48.914 08:17:36 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:48.914 08:17:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:48.914 08:17:36 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:48.914 08:17:36 json_config -- common/autotest_common.sh@735 -- # xtrace_disable 00:04:48.914 08:17:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.173 08:17:36 json_config -- json_config/json_config.sh@330 -- # killprocess 57223 00:04:49.173 08:17:36 json_config -- common/autotest_common.sh@957 -- # '[' -z 57223 ']' 00:04:49.173 08:17:36 json_config -- common/autotest_common.sh@961 -- # kill -0 57223 00:04:49.173 08:17:36 json_config -- common/autotest_common.sh@962 -- # uname 00:04:49.173 08:17:36 json_config -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:04:49.173 08:17:36 json_config -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 57223 00:04:49.173 
08:17:36 json_config -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:04:49.173 killing process with pid 57223 00:04:49.173 08:17:36 json_config -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:04:49.173 08:17:36 json_config -- common/autotest_common.sh@975 -- # echo 'killing process with pid 57223' 00:04:49.173 08:17:36 json_config -- common/autotest_common.sh@976 -- # kill 57223 00:04:49.173 08:17:36 json_config -- common/autotest_common.sh@981 -- # wait 57223 00:04:49.431 08:17:36 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.431 08:17:36 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:49.431 08:17:36 json_config -- common/autotest_common.sh@735 -- # xtrace_disable 00:04:49.431 08:17:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.431 08:17:36 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:49.431 INFO: Success 00:04:49.431 08:17:36 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:49.431 00:04:49.431 real 0m9.032s 00:04:49.431 user 0m13.084s 00:04:49.431 sys 0m1.812s 00:04:49.431 08:17:36 json_config -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:49.431 08:17:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.431 ************************************ 00:04:49.431 END TEST json_config 00:04:49.431 ************************************ 00:04:49.431 08:17:36 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:49.431 08:17:36 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:49.431 08:17:36 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:49.431 08:17:36 -- common/autotest_common.sh@10 -- # set +x 00:04:49.431 ************************************ 00:04:49.431 START TEST json_config_extra_key 00:04:49.431 ************************************ 00:04:49.431 08:17:36 json_config_extra_key -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:49.431 08:17:36 json_config_extra_key -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:04:49.431 08:17:36 json_config_extra_key -- common/autotest_common.sh@1638 -- # lcov --version 00:04:49.431 08:17:36 json_config_extra_key -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:04:49.690 08:17:37 json_config_extra_key -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.690 08:17:37 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:49.690 08:17:37 json_config_extra_key -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.690 08:17:37 json_config_extra_key -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:04:49.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.690 --rc genhtml_branch_coverage=1 00:04:49.690 --rc genhtml_function_coverage=1 00:04:49.690 --rc genhtml_legend=1 00:04:49.690 --rc geninfo_all_blocks=1 00:04:49.690 --rc geninfo_unexecuted_blocks=1 00:04:49.690 00:04:49.690 ' 00:04:49.690 08:17:37 json_config_extra_key -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:04:49.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.690 --rc genhtml_branch_coverage=1 00:04:49.690 --rc genhtml_function_coverage=1 00:04:49.690 --rc genhtml_legend=1 00:04:49.690 --rc geninfo_all_blocks=1 00:04:49.690 --rc geninfo_unexecuted_blocks=1 00:04:49.690 00:04:49.690 ' 00:04:49.690 08:17:37 json_config_extra_key -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:04:49.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.690 --rc genhtml_branch_coverage=1 00:04:49.690 --rc genhtml_function_coverage=1 00:04:49.690 --rc genhtml_legend=1 00:04:49.690 --rc geninfo_all_blocks=1 00:04:49.690 --rc geninfo_unexecuted_blocks=1 00:04:49.690 00:04:49.690 ' 00:04:49.690 08:17:37 json_config_extra_key -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:04:49.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.690 --rc genhtml_branch_coverage=1 00:04:49.690 --rc genhtml_function_coverage=1 00:04:49.690 --rc genhtml_legend=1 00:04:49.690 --rc geninfo_all_blocks=1 00:04:49.690 --rc geninfo_unexecuted_blocks=1 00:04:49.690 00:04:49.690 ' 00:04:49.690 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.690 08:17:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.690 08:17:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.691 08:17:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.691 08:17:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.691 08:17:37 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.691 08:17:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:49.691 08:17:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.691 08:17:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:49.691 08:17:37 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:49.691 08:17:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:49.691 08:17:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.691 08:17:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.691 08:17:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.691 08:17:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:49.691 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:49.691 08:17:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:49.691 08:17:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:49.691 08:17:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:49.691 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:49.691 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:49.691 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:49.691 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:49.691 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:49.691 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:49.691 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:49.691 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:49.691 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:49.691 INFO: launching applications... 
00:04:49.691 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:49.691 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:49.691 08:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:49.691 08:17:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:49.691 08:17:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:49.691 08:17:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.691 08:17:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.691 08:17:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.691 08:17:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.691 08:17:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.691 08:17:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57378 00:04:49.691 Waiting for target to run... 00:04:49.691 08:17:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:49.691 08:17:37 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:49.691 08:17:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57378 /var/tmp/spdk_tgt.sock 00:04:49.691 08:17:37 json_config_extra_key -- common/autotest_common.sh@838 -- # '[' -z 57378 ']' 00:04:49.691 08:17:37 json_config_extra_key -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.691 08:17:37 json_config_extra_key -- common/autotest_common.sh@843 -- # local max_retries=100 00:04:49.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:49.691 08:17:37 json_config_extra_key -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.691 08:17:37 json_config_extra_key -- common/autotest_common.sh@847 -- # xtrace_disable 00:04:49.691 08:17:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:49.691 [2024-11-20 08:17:37.171203] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:04:49.691 [2024-11-20 08:17:37.171358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57378 ] 00:04:50.257 [2024-11-20 08:17:37.627181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.257 [2024-11-20 08:17:37.680695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.257 [2024-11-20 08:17:37.711794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:50.822 08:17:38 json_config_extra_key -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:04:50.822 08:17:38 json_config_extra_key -- common/autotest_common.sh@871 -- # return 0 00:04:50.822 00:04:50.822 08:17:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:50.822 INFO: shutting down applications... 00:04:50.822 08:17:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:50.822 08:17:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:50.822 08:17:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:50.822 08:17:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:50.822 08:17:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57378 ]] 00:04:50.822 08:17:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57378 00:04:50.822 08:17:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:50.822 08:17:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.822 08:17:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57378 00:04:50.822 08:17:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.388 08:17:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.388 08:17:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.388 08:17:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57378 00:04:51.388 08:17:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.388 08:17:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:51.388 08:17:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.388 SPDK target shutdown done 00:04:51.388 08:17:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.388 Success 00:04:51.388 08:17:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:51.388 00:04:51.388 real 0m1.883s 00:04:51.388 user 0m1.804s 00:04:51.388 sys 0m0.505s 00:04:51.388 08:17:38 json_config_extra_key -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:51.388 ************************************ 00:04:51.388 END TEST json_config_extra_key 00:04:51.388 ************************************ 00:04:51.388 08:17:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.388 08:17:38 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.388 08:17:38 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:51.388 08:17:38 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:51.388 08:17:38 -- 
common/autotest_common.sh@10 -- # set +x 00:04:51.388 ************************************ 00:04:51.388 START TEST alias_rpc 00:04:51.388 ************************************ 00:04:51.388 08:17:38 alias_rpc -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.388 * Looking for test storage... 00:04:51.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:51.388 08:17:38 alias_rpc -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:04:51.388 08:17:38 alias_rpc -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:04:51.388 08:17:38 alias_rpc -- common/autotest_common.sh@1638 -- # lcov --version 00:04:51.647 08:17:38 alias_rpc -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.647 08:17:38 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:51.647 08:17:38 alias_rpc -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.647 08:17:38 alias_rpc -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:04:51.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.647 --rc genhtml_branch_coverage=1 00:04:51.647 --rc genhtml_function_coverage=1 00:04:51.647 --rc genhtml_legend=1 00:04:51.647 --rc geninfo_all_blocks=1 00:04:51.647 --rc geninfo_unexecuted_blocks=1 00:04:51.647 00:04:51.647 ' 00:04:51.647 08:17:38 alias_rpc -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:04:51.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.647 --rc genhtml_branch_coverage=1 00:04:51.647 --rc genhtml_function_coverage=1 00:04:51.647 --rc genhtml_legend=1 00:04:51.647 --rc geninfo_all_blocks=1 00:04:51.647 --rc geninfo_unexecuted_blocks=1 00:04:51.647 00:04:51.647 ' 00:04:51.647 08:17:38 alias_rpc -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:04:51.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.647 --rc genhtml_branch_coverage=1 00:04:51.647 --rc genhtml_function_coverage=1 00:04:51.647 --rc genhtml_legend=1 00:04:51.647 --rc geninfo_all_blocks=1 00:04:51.647 --rc geninfo_unexecuted_blocks=1 00:04:51.647 00:04:51.647 ' 00:04:51.647 08:17:38 alias_rpc -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:04:51.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.647 --rc genhtml_branch_coverage=1 00:04:51.647 --rc genhtml_function_coverage=1 00:04:51.647 --rc genhtml_legend=1 00:04:51.647 --rc geninfo_all_blocks=1 00:04:51.647 --rc geninfo_unexecuted_blocks=1 00:04:51.647 00:04:51.647 ' 00:04:51.647 08:17:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:51.647 08:17:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57462 00:04:51.647 08:17:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57462 00:04:51.647 08:17:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.647 08:17:39 alias_rpc -- common/autotest_common.sh@838 -- # '[' -z 57462 ']' 00:04:51.647 08:17:39 alias_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.647 08:17:39 alias_rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:04:51.647 08:17:39 alias_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:51.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.647 08:17:39 alias_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:04:51.647 08:17:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.647 [2024-11-20 08:17:39.067947] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:04:51.648 [2024-11-20 08:17:39.068287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57462 ] 00:04:51.905 [2024-11-20 08:17:39.218148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.905 [2024-11-20 08:17:39.283374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.905 [2024-11-20 08:17:39.357035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:52.163 08:17:39 alias_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:04:52.163 08:17:39 alias_rpc -- common/autotest_common.sh@871 -- # return 0 00:04:52.163 08:17:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:52.421 08:17:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57462 00:04:52.421 08:17:39 alias_rpc -- common/autotest_common.sh@957 -- # '[' -z 57462 ']' 00:04:52.421 08:17:39 alias_rpc -- common/autotest_common.sh@961 -- # kill -0 57462 00:04:52.421 08:17:39 alias_rpc -- common/autotest_common.sh@962 -- # uname 00:04:52.421 08:17:39 alias_rpc -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:04:52.421 08:17:39 alias_rpc -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 57462 00:04:52.421 killing process with pid 57462 00:04:52.421 08:17:39 alias_rpc -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:04:52.421 08:17:39 alias_rpc -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:04:52.421 08:17:39 alias_rpc -- common/autotest_common.sh@975 -- # echo 'killing process with pid 57462' 00:04:52.421 08:17:39 alias_rpc -- common/autotest_common.sh@976 -- # kill 57462 00:04:52.421 08:17:39 alias_rpc -- common/autotest_common.sh@981 -- # wait 57462 00:04:52.988 ************************************ 00:04:52.988 END TEST alias_rpc 00:04:52.988 ************************************ 00:04:52.988 00:04:52.988 real 0m1.526s 00:04:52.988 user 0m1.640s 00:04:52.988 sys 0m0.451s 00:04:52.988 08:17:40 alias_rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:52.988 08:17:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.988 08:17:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:52.988 08:17:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:52.988 08:17:40 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:52.988 08:17:40 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:52.988 08:17:40 -- common/autotest_common.sh@10 -- # set +x 00:04:52.988 ************************************ 00:04:52.988 START TEST spdkcli_tcp 00:04:52.988 ************************************ 00:04:52.988 08:17:40 spdkcli_tcp -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:52.988 * Looking for test storage... 
00:04:52.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:52.988 08:17:40 spdkcli_tcp -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:04:52.989 08:17:40 spdkcli_tcp -- common/autotest_common.sh@1638 -- # lcov --version 00:04:52.989 08:17:40 spdkcli_tcp -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:04:52.989 08:17:40 spdkcli_tcp -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.989 08:17:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:53.247 08:17:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.247 08:17:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:53.247 08:17:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:53.247 08:17:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.247 08:17:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:53.247 08:17:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.247 08:17:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.247 08:17:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.247 08:17:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:53.247 08:17:40 spdkcli_tcp -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.247 08:17:40 spdkcli_tcp -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:04:53.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.247 --rc genhtml_branch_coverage=1 00:04:53.247 --rc genhtml_function_coverage=1 00:04:53.247 --rc genhtml_legend=1 00:04:53.247 --rc geninfo_all_blocks=1 00:04:53.247 --rc geninfo_unexecuted_blocks=1 00:04:53.247 00:04:53.247 ' 00:04:53.247 08:17:40 spdkcli_tcp -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:04:53.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.247 --rc genhtml_branch_coverage=1 00:04:53.247 --rc genhtml_function_coverage=1 00:04:53.247 --rc genhtml_legend=1 00:04:53.247 --rc geninfo_all_blocks=1 00:04:53.247 --rc geninfo_unexecuted_blocks=1 00:04:53.247 
00:04:53.247 ' 00:04:53.247 08:17:40 spdkcli_tcp -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:04:53.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.247 --rc genhtml_branch_coverage=1 00:04:53.247 --rc genhtml_function_coverage=1 00:04:53.247 --rc genhtml_legend=1 00:04:53.247 --rc geninfo_all_blocks=1 00:04:53.247 --rc geninfo_unexecuted_blocks=1 00:04:53.247 00:04:53.247 ' 00:04:53.247 08:17:40 spdkcli_tcp -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:04:53.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.247 --rc genhtml_branch_coverage=1 00:04:53.247 --rc genhtml_function_coverage=1 00:04:53.247 --rc genhtml_legend=1 00:04:53.247 --rc geninfo_all_blocks=1 00:04:53.247 --rc geninfo_unexecuted_blocks=1 00:04:53.247 00:04:53.247 ' 00:04:53.247 08:17:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:53.247 08:17:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:53.248 08:17:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:53.248 08:17:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:53.248 08:17:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:53.248 08:17:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:53.248 08:17:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:53.248 08:17:40 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:53.248 08:17:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.248 08:17:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57550 00:04:53.248 08:17:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:53.248 08:17:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57550 00:04:53.248 08:17:40 spdkcli_tcp -- common/autotest_common.sh@838 -- # '[' -z 57550 ']' 00:04:53.248 08:17:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.248 08:17:40 spdkcli_tcp -- common/autotest_common.sh@843 -- # local max_retries=100 00:04:53.248 08:17:40 spdkcli_tcp -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.248 08:17:40 spdkcli_tcp -- common/autotest_common.sh@847 -- # xtrace_disable 00:04:53.248 08:17:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.248 [2024-11-20 08:17:40.623557] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:04:53.248 [2024-11-20 08:17:40.623920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57550 ] 00:04:53.248 [2024-11-20 08:17:40.767176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.507 [2024-11-20 08:17:40.832389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.507 [2024-11-20 08:17:40.832397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.507 [2024-11-20 08:17:40.903830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:54.444 08:17:41 spdkcli_tcp -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:04:54.444 08:17:41 spdkcli_tcp -- common/autotest_common.sh@871 -- # return 0 00:04:54.444 08:17:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57567 00:04:54.444 08:17:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:54.444 08:17:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:54.444 [ 00:04:54.444 "bdev_malloc_delete", 00:04:54.444 "bdev_malloc_create", 00:04:54.444 "bdev_null_resize", 00:04:54.444 "bdev_null_delete", 00:04:54.444 "bdev_null_create", 00:04:54.444 "bdev_nvme_cuse_unregister", 00:04:54.444 "bdev_nvme_cuse_register", 00:04:54.444 "bdev_opal_new_user", 00:04:54.444 "bdev_opal_set_lock_state", 00:04:54.444 "bdev_opal_delete", 00:04:54.444 "bdev_opal_get_info", 00:04:54.444 "bdev_opal_create", 00:04:54.444 "bdev_nvme_opal_revert", 00:04:54.444 "bdev_nvme_opal_init", 00:04:54.444 "bdev_nvme_send_cmd", 00:04:54.444 "bdev_nvme_set_keys", 00:04:54.444 "bdev_nvme_get_path_iostat", 00:04:54.444 "bdev_nvme_get_mdns_discovery_info", 00:04:54.444 "bdev_nvme_stop_mdns_discovery", 00:04:54.444 "bdev_nvme_start_mdns_discovery", 00:04:54.444 "bdev_nvme_set_multipath_policy", 00:04:54.444 "bdev_nvme_set_preferred_path", 00:04:54.444 "bdev_nvme_get_io_paths", 00:04:54.444 "bdev_nvme_remove_error_injection", 00:04:54.444 "bdev_nvme_add_error_injection", 00:04:54.444 "bdev_nvme_get_discovery_info", 00:04:54.444 "bdev_nvme_stop_discovery", 00:04:54.444 "bdev_nvme_start_discovery", 00:04:54.444 "bdev_nvme_get_controller_health_info", 00:04:54.444 "bdev_nvme_disable_controller", 00:04:54.444 "bdev_nvme_enable_controller", 00:04:54.444 "bdev_nvme_reset_controller", 00:04:54.444 "bdev_nvme_get_transport_statistics", 00:04:54.444 "bdev_nvme_apply_firmware", 00:04:54.444 "bdev_nvme_detach_controller", 00:04:54.444 "bdev_nvme_get_controllers", 00:04:54.444 "bdev_nvme_attach_controller", 00:04:54.444 "bdev_nvme_set_hotplug", 00:04:54.444 "bdev_nvme_set_options", 00:04:54.444 "bdev_passthru_delete", 00:04:54.444 "bdev_passthru_create", 00:04:54.444 "bdev_lvol_set_parent_bdev", 00:04:54.444 "bdev_lvol_set_parent", 00:04:54.444 "bdev_lvol_check_shallow_copy", 00:04:54.444 "bdev_lvol_start_shallow_copy", 00:04:54.444 "bdev_lvol_grow_lvstore", 00:04:54.444 "bdev_lvol_get_lvols", 00:04:54.444 "bdev_lvol_get_lvstores", 00:04:54.444 "bdev_lvol_delete", 00:04:54.444 "bdev_lvol_set_read_only", 00:04:54.444 "bdev_lvol_resize", 00:04:54.444 "bdev_lvol_decouple_parent", 00:04:54.444 "bdev_lvol_inflate", 00:04:54.444 "bdev_lvol_rename", 00:04:54.444 "bdev_lvol_clone_bdev", 00:04:54.444 "bdev_lvol_clone", 00:04:54.444 "bdev_lvol_snapshot", 
00:04:54.444 "bdev_lvol_create", 00:04:54.444 "bdev_lvol_delete_lvstore", 00:04:54.444 "bdev_lvol_rename_lvstore", 00:04:54.444 "bdev_lvol_create_lvstore", 00:04:54.444 "bdev_raid_set_options", 00:04:54.444 "bdev_raid_remove_base_bdev", 00:04:54.444 "bdev_raid_add_base_bdev", 00:04:54.444 "bdev_raid_delete", 00:04:54.444 "bdev_raid_create", 00:04:54.444 "bdev_raid_get_bdevs", 00:04:54.444 "bdev_error_inject_error", 00:04:54.444 "bdev_error_delete", 00:04:54.444 "bdev_error_create", 00:04:54.444 "bdev_split_delete", 00:04:54.444 "bdev_split_create", 00:04:54.444 "bdev_delay_delete", 00:04:54.444 "bdev_delay_create", 00:04:54.444 "bdev_delay_update_latency", 00:04:54.444 "bdev_zone_block_delete", 00:04:54.444 "bdev_zone_block_create", 00:04:54.444 "blobfs_create", 00:04:54.444 "blobfs_detect", 00:04:54.444 "blobfs_set_cache_size", 00:04:54.444 "bdev_aio_delete", 00:04:54.444 "bdev_aio_rescan", 00:04:54.444 "bdev_aio_create", 00:04:54.444 "bdev_ftl_set_property", 00:04:54.444 "bdev_ftl_get_properties", 00:04:54.444 "bdev_ftl_get_stats", 00:04:54.444 "bdev_ftl_unmap", 00:04:54.444 "bdev_ftl_unload", 00:04:54.444 "bdev_ftl_delete", 00:04:54.444 "bdev_ftl_load", 00:04:54.444 "bdev_ftl_create", 00:04:54.444 "bdev_virtio_attach_controller", 00:04:54.444 "bdev_virtio_scsi_get_devices", 00:04:54.444 "bdev_virtio_detach_controller", 00:04:54.444 "bdev_virtio_blk_set_hotplug", 00:04:54.444 "bdev_iscsi_delete", 00:04:54.444 "bdev_iscsi_create", 00:04:54.444 "bdev_iscsi_set_options", 00:04:54.444 "bdev_uring_delete", 00:04:54.444 "bdev_uring_rescan", 00:04:54.444 "bdev_uring_create", 00:04:54.444 "accel_error_inject_error", 00:04:54.444 "ioat_scan_accel_module", 00:04:54.444 "dsa_scan_accel_module", 00:04:54.444 "iaa_scan_accel_module", 00:04:54.444 "keyring_file_remove_key", 00:04:54.444 "keyring_file_add_key", 00:04:54.444 "keyring_linux_set_options", 00:04:54.444 "fsdev_aio_delete", 00:04:54.444 "fsdev_aio_create", 00:04:54.444 "iscsi_get_histogram", 00:04:54.444 "iscsi_enable_histogram", 00:04:54.444 "iscsi_set_options", 00:04:54.444 "iscsi_get_auth_groups", 00:04:54.444 "iscsi_auth_group_remove_secret", 00:04:54.444 "iscsi_auth_group_add_secret", 00:04:54.444 "iscsi_delete_auth_group", 00:04:54.444 "iscsi_create_auth_group", 00:04:54.444 "iscsi_set_discovery_auth", 00:04:54.444 "iscsi_get_options", 00:04:54.444 "iscsi_target_node_request_logout", 00:04:54.444 "iscsi_target_node_set_redirect", 00:04:54.444 "iscsi_target_node_set_auth", 00:04:54.444 "iscsi_target_node_add_lun", 00:04:54.444 "iscsi_get_stats", 00:04:54.444 "iscsi_get_connections", 00:04:54.444 "iscsi_portal_group_set_auth", 00:04:54.444 "iscsi_start_portal_group", 00:04:54.444 "iscsi_delete_portal_group", 00:04:54.444 "iscsi_create_portal_group", 00:04:54.444 "iscsi_get_portal_groups", 00:04:54.444 "iscsi_delete_target_node", 00:04:54.444 "iscsi_target_node_remove_pg_ig_maps", 00:04:54.444 "iscsi_target_node_add_pg_ig_maps", 00:04:54.444 "iscsi_create_target_node", 00:04:54.444 "iscsi_get_target_nodes", 00:04:54.444 "iscsi_delete_initiator_group", 00:04:54.444 "iscsi_initiator_group_remove_initiators", 00:04:54.444 "iscsi_initiator_group_add_initiators", 00:04:54.444 "iscsi_create_initiator_group", 00:04:54.444 "iscsi_get_initiator_groups", 00:04:54.444 "nvmf_set_crdt", 00:04:54.444 "nvmf_set_config", 00:04:54.444 "nvmf_set_max_subsystems", 00:04:54.444 "nvmf_stop_mdns_prr", 00:04:54.444 "nvmf_publish_mdns_prr", 00:04:54.444 "nvmf_subsystem_get_listeners", 00:04:54.444 "nvmf_subsystem_get_qpairs", 00:04:54.444 
"nvmf_subsystem_get_controllers", 00:04:54.444 "nvmf_get_stats", 00:04:54.444 "nvmf_get_transports", 00:04:54.444 "nvmf_create_transport", 00:04:54.444 "nvmf_get_targets", 00:04:54.444 "nvmf_delete_target", 00:04:54.444 "nvmf_create_target", 00:04:54.444 "nvmf_subsystem_allow_any_host", 00:04:54.444 "nvmf_subsystem_set_keys", 00:04:54.444 "nvmf_subsystem_remove_host", 00:04:54.444 "nvmf_subsystem_add_host", 00:04:54.444 "nvmf_ns_remove_host", 00:04:54.444 "nvmf_ns_add_host", 00:04:54.444 "nvmf_subsystem_remove_ns", 00:04:54.444 "nvmf_subsystem_set_ns_ana_group", 00:04:54.444 "nvmf_subsystem_add_ns", 00:04:54.444 "nvmf_subsystem_listener_set_ana_state", 00:04:54.444 "nvmf_discovery_get_referrals", 00:04:54.444 "nvmf_discovery_remove_referral", 00:04:54.444 "nvmf_discovery_add_referral", 00:04:54.444 "nvmf_subsystem_remove_listener", 00:04:54.444 "nvmf_subsystem_add_listener", 00:04:54.444 "nvmf_delete_subsystem", 00:04:54.444 "nvmf_create_subsystem", 00:04:54.444 "nvmf_get_subsystems", 00:04:54.444 "env_dpdk_get_mem_stats", 00:04:54.444 "nbd_get_disks", 00:04:54.444 "nbd_stop_disk", 00:04:54.444 "nbd_start_disk", 00:04:54.444 "ublk_recover_disk", 00:04:54.444 "ublk_get_disks", 00:04:54.444 "ublk_stop_disk", 00:04:54.444 "ublk_start_disk", 00:04:54.444 "ublk_destroy_target", 00:04:54.444 "ublk_create_target", 00:04:54.444 "virtio_blk_create_transport", 00:04:54.444 "virtio_blk_get_transports", 00:04:54.444 "vhost_controller_set_coalescing", 00:04:54.444 "vhost_get_controllers", 00:04:54.444 "vhost_delete_controller", 00:04:54.444 "vhost_create_blk_controller", 00:04:54.444 "vhost_scsi_controller_remove_target", 00:04:54.444 "vhost_scsi_controller_add_target", 00:04:54.444 "vhost_start_scsi_controller", 00:04:54.444 "vhost_create_scsi_controller", 00:04:54.444 "thread_set_cpumask", 00:04:54.444 "scheduler_set_options", 00:04:54.444 "framework_get_governor", 00:04:54.444 "framework_get_scheduler", 00:04:54.444 "framework_set_scheduler", 00:04:54.444 "framework_get_reactors", 00:04:54.444 "thread_get_io_channels", 00:04:54.444 "thread_get_pollers", 00:04:54.444 "thread_get_stats", 00:04:54.444 "framework_monitor_context_switch", 00:04:54.444 "spdk_kill_instance", 00:04:54.444 "log_enable_timestamps", 00:04:54.444 "log_get_flags", 00:04:54.444 "log_clear_flag", 00:04:54.444 "log_set_flag", 00:04:54.444 "log_get_level", 00:04:54.444 "log_set_level", 00:04:54.444 "log_get_print_level", 00:04:54.444 "log_set_print_level", 00:04:54.445 "framework_enable_cpumask_locks", 00:04:54.445 "framework_disable_cpumask_locks", 00:04:54.445 "framework_wait_init", 00:04:54.445 "framework_start_init", 00:04:54.445 "scsi_get_devices", 00:04:54.445 "bdev_get_histogram", 00:04:54.445 "bdev_enable_histogram", 00:04:54.445 "bdev_set_qos_limit", 00:04:54.445 "bdev_set_qd_sampling_period", 00:04:54.445 "bdev_get_bdevs", 00:04:54.445 "bdev_reset_iostat", 00:04:54.445 "bdev_get_iostat", 00:04:54.445 "bdev_examine", 00:04:54.445 "bdev_wait_for_examine", 00:04:54.445 "bdev_set_options", 00:04:54.445 "accel_get_stats", 00:04:54.445 "accel_set_options", 00:04:54.445 "accel_set_driver", 00:04:54.445 "accel_crypto_key_destroy", 00:04:54.445 "accel_crypto_keys_get", 00:04:54.445 "accel_crypto_key_create", 00:04:54.445 "accel_assign_opc", 00:04:54.445 "accel_get_module_info", 00:04:54.445 "accel_get_opc_assignments", 00:04:54.445 "vmd_rescan", 00:04:54.445 "vmd_remove_device", 00:04:54.445 "vmd_enable", 00:04:54.445 "sock_get_default_impl", 00:04:54.445 "sock_set_default_impl", 00:04:54.445 "sock_impl_set_options", 00:04:54.445 
"sock_impl_get_options", 00:04:54.445 "iobuf_get_stats", 00:04:54.445 "iobuf_set_options", 00:04:54.445 "keyring_get_keys", 00:04:54.445 "framework_get_pci_devices", 00:04:54.445 "framework_get_config", 00:04:54.445 "framework_get_subsystems", 00:04:54.445 "fsdev_set_opts", 00:04:54.445 "fsdev_get_opts", 00:04:54.445 "trace_get_info", 00:04:54.445 "trace_get_tpoint_group_mask", 00:04:54.445 "trace_disable_tpoint_group", 00:04:54.445 "trace_enable_tpoint_group", 00:04:54.445 "trace_clear_tpoint_mask", 00:04:54.445 "trace_set_tpoint_mask", 00:04:54.445 "notify_get_notifications", 00:04:54.445 "notify_get_types", 00:04:54.445 "spdk_get_version", 00:04:54.445 "rpc_get_methods" 00:04:54.445 ] 00:04:54.445 08:17:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:54.445 08:17:41 spdkcli_tcp -- common/autotest_common.sh@735 -- # xtrace_disable 00:04:54.445 08:17:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.445 08:17:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:54.445 08:17:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57550 00:04:54.445 08:17:41 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' -z 57550 ']' 00:04:54.445 08:17:41 spdkcli_tcp -- common/autotest_common.sh@961 -- # kill -0 57550 00:04:54.445 08:17:41 spdkcli_tcp -- common/autotest_common.sh@962 -- # uname 00:04:54.445 08:17:41 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:04:54.445 08:17:41 spdkcli_tcp -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 57550 00:04:54.704 killing process with pid 57550 00:04:54.704 08:17:42 spdkcli_tcp -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:04:54.704 08:17:42 spdkcli_tcp -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:04:54.704 08:17:42 spdkcli_tcp -- common/autotest_common.sh@975 -- # echo 'killing process with pid 57550' 00:04:54.704 08:17:42 spdkcli_tcp -- common/autotest_common.sh@976 -- # kill 57550 00:04:54.704 08:17:42 spdkcli_tcp -- common/autotest_common.sh@981 -- # wait 57550 00:04:54.962 ************************************ 00:04:54.962 END TEST spdkcli_tcp 00:04:54.962 ************************************ 00:04:54.962 00:04:54.962 real 0m2.018s 00:04:54.962 user 0m3.787s 00:04:54.962 sys 0m0.524s 00:04:54.962 08:17:42 spdkcli_tcp -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:54.962 08:17:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.962 08:17:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:54.962 08:17:42 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:54.962 08:17:42 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:54.962 08:17:42 -- common/autotest_common.sh@10 -- # set +x 00:04:54.962 ************************************ 00:04:54.962 START TEST dpdk_mem_utility 00:04:54.962 ************************************ 00:04:54.962 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:54.962 * Looking for test storage... 
00:04:54.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:54.962 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:04:54.962 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:04:54.962 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@1638 -- # lcov --version 00:04:55.220 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:04:55.220 08:17:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.220 08:17:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.220 08:17:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.220 08:17:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.220 08:17:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.220 08:17:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.220 08:17:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.220 08:17:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.220 08:17:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.220 08:17:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:55.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.221 08:17:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:55.221 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.221 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:04:55.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.221 --rc genhtml_branch_coverage=1 00:04:55.221 --rc genhtml_function_coverage=1 00:04:55.221 --rc genhtml_legend=1 00:04:55.221 --rc geninfo_all_blocks=1 00:04:55.221 --rc geninfo_unexecuted_blocks=1 00:04:55.221 00:04:55.221 ' 00:04:55.221 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:04:55.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.221 --rc genhtml_branch_coverage=1 00:04:55.221 --rc genhtml_function_coverage=1 00:04:55.221 --rc genhtml_legend=1 00:04:55.221 --rc geninfo_all_blocks=1 00:04:55.221 --rc geninfo_unexecuted_blocks=1 00:04:55.221 00:04:55.221 ' 00:04:55.221 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:04:55.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.221 --rc genhtml_branch_coverage=1 00:04:55.221 --rc genhtml_function_coverage=1 00:04:55.221 --rc genhtml_legend=1 00:04:55.221 --rc geninfo_all_blocks=1 00:04:55.221 --rc geninfo_unexecuted_blocks=1 00:04:55.221 00:04:55.221 ' 00:04:55.221 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:04:55.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.221 --rc genhtml_branch_coverage=1 00:04:55.221 --rc genhtml_function_coverage=1 00:04:55.221 --rc genhtml_legend=1 00:04:55.221 --rc geninfo_all_blocks=1 00:04:55.221 --rc geninfo_unexecuted_blocks=1 00:04:55.221 00:04:55.221 ' 00:04:55.221 08:17:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:55.221 08:17:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57655 00:04:55.221 08:17:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57655 00:04:55.221 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@838 -- # '[' -z 57655 ']' 00:04:55.221 08:17:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.221 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.221 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@843 -- # local max_retries=100 00:04:55.221 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.221 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@847 -- # xtrace_disable 00:04:55.221 08:17:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.221 [2024-11-20 08:17:42.718065] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:04:55.221 [2024-11-20 08:17:42.718678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57655 ] 00:04:55.479 [2024-11-20 08:17:42.864186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.479 [2024-11-20 08:17:42.926544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.479 [2024-11-20 08:17:42.999458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:55.737 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:04:55.737 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@871 -- # return 0 00:04:55.737 08:17:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:55.737 08:17:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:55.737 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@566 -- # xtrace_disable 00:04:55.737 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.737 { 00:04:55.737 "filename": "/tmp/spdk_mem_dump.txt" 00:04:55.737 } 00:04:55.737 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:04:55.737 08:17:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:55.737 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:55.737 1 heaps totaling size 810.000000 MiB 00:04:55.737 size: 810.000000 MiB heap id: 0 00:04:55.737 end heaps---------- 00:04:55.737 9 mempools totaling size 595.772034 MiB 00:04:55.737 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:55.737 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:55.737 size: 92.545471 MiB name: bdev_io_57655 00:04:55.737 size: 50.003479 MiB name: msgpool_57655 00:04:55.737 size: 36.509338 MiB name: fsdev_io_57655 00:04:55.737 size: 21.763794 MiB name: PDU_Pool 00:04:55.737 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:55.737 size: 4.133484 MiB name: evtpool_57655 00:04:55.737 size: 0.026123 MiB name: Session_Pool 00:04:55.737 end mempools------- 00:04:55.737 6 memzones totaling size 4.142822 MiB 00:04:55.737 size: 1.000366 MiB name: RG_ring_0_57655 00:04:55.737 size: 1.000366 MiB name: RG_ring_1_57655 00:04:55.737 size: 1.000366 MiB name: RG_ring_4_57655 00:04:55.737 size: 1.000366 MiB name: RG_ring_5_57655 00:04:55.737 size: 0.125366 MiB name: RG_ring_2_57655 00:04:55.737 size: 0.015991 MiB name: RG_ring_3_57655 00:04:55.737 end memzones------- 00:04:55.737 08:17:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:55.997 heap id: 0 total size: 810.000000 MiB number of busy elements: 291 number of free elements: 15 00:04:55.997 list of free elements. 
size: 10.817261 MiB 00:04:55.997 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:55.997 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:55.997 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:55.997 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:55.997 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:55.997 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:55.997 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:55.997 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:55.997 element at address: 0x20001a600000 with size: 0.570251 MiB 00:04:55.997 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:55.997 element at address: 0x200000c00000 with size: 0.487000 MiB 00:04:55.997 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:55.997 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:55.997 element at address: 0x200027a00000 with size: 0.397217 MiB 00:04:55.997 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:55.997 list of standard malloc elements. size: 199.263855 MiB 00:04:55.997 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:55.997 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:55.997 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:55.997 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:55.997 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:55.997 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:55.997 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:55.997 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:55.997 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:55.997 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:55.997 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:55.997 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:55.997 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:55.997 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:55.997 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:55.997 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:55.997 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:55.997 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:55.997 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:55.997 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:55.998 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:55.998 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:55.998 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:55.998 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692080 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692140 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692200 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692380 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692440 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692500 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692680 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692740 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692800 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692980 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693040 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693100 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693280 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693340 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693400 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693580 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693640 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693700 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693880 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693940 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693a00 with size: 0.000183 MiB 
00:04:55.998 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694000 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694180 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694240 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694300 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694480 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694540 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694600 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694780 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694840 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694900 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:04:55.998 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x20001a695080 with size: 0.000183 MiB 00:04:55.999 element at address: 0x20001a695140 with size: 0.000183 MiB 00:04:55.999 element at address: 0x20001a695200 with size: 0.000183 MiB 00:04:55.999 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:55.999 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a65b00 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a65bc0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6c7c0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:04:55.999 element at 
address: 0x200027a6d200 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f6c0 
with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:55.999 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:55.999 list of memzone associated elements. size: 599.918884 MiB 00:04:55.999 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:55.999 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:55.999 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:55.999 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:55.999 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:55.999 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57655_0 00:04:55.999 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:55.999 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57655_0 00:04:55.999 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:55.999 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57655_0 00:04:55.999 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:55.999 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:55.999 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:55.999 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:55.999 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:55.999 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57655_0 00:04:55.999 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:55.999 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57655 00:04:55.999 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:55.999 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57655 00:04:55.999 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:55.999 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:55.999 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:55.999 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:55.999 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:55.999 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:55.999 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:55.999 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:55.999 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:55.999 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57655 00:04:55.999 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:55.999 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57655 00:04:55.999 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:55.999 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57655 00:04:55.999 element at 
address: 0x2000318fe940 with size: 1.000488 MiB 00:04:56.000 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57655 00:04:56.000 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:56.000 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57655 00:04:56.000 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:56.000 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57655 00:04:56.000 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:56.000 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:56.000 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:56.000 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:56.000 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:56.000 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:56.000 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:56.000 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57655 00:04:56.000 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:56.000 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57655 00:04:56.000 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:56.000 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:56.000 element at address: 0x200027a65c80 with size: 0.023743 MiB 00:04:56.000 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:56.000 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:56.000 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57655 00:04:56.000 element at address: 0x200027a6bdc0 with size: 0.002441 MiB 00:04:56.000 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:56.000 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:56.000 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57655 00:04:56.000 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:56.000 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57655 00:04:56.000 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:56.000 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57655 00:04:56.000 element at address: 0x200027a6c880 with size: 0.000305 MiB 00:04:56.000 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:56.000 08:17:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:56.000 08:17:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57655 00:04:56.000 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' -z 57655 ']' 00:04:56.000 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@961 -- # kill -0 57655 00:04:56.000 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@962 -- # uname 00:04:56.000 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:04:56.000 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 57655 00:04:56.000 killing process with pid 57655 00:04:56.000 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:04:56.000 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:04:56.000 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@975 -- # echo 'killing process with pid 57655' 00:04:56.000 08:17:43 dpdk_mem_utility -- 
common/autotest_common.sh@976 -- # kill 57655 00:04:56.000 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@981 -- # wait 57655 00:04:56.258 ************************************ 00:04:56.258 END TEST dpdk_mem_utility 00:04:56.258 ************************************ 00:04:56.258 00:04:56.258 real 0m1.309s 00:04:56.258 user 0m1.257s 00:04:56.258 sys 0m0.406s 00:04:56.258 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:56.258 08:17:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.258 08:17:43 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:56.258 08:17:43 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:04:56.258 08:17:43 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:56.258 08:17:43 -- common/autotest_common.sh@10 -- # set +x 00:04:56.258 ************************************ 00:04:56.258 START TEST event 00:04:56.258 ************************************ 00:04:56.258 08:17:43 event -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:56.517 * Looking for test storage... 00:04:56.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:56.517 08:17:43 event -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:04:56.517 08:17:43 event -- common/autotest_common.sh@1638 -- # lcov --version 00:04:56.517 08:17:43 event -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:04:56.517 08:17:43 event -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:04:56.517 08:17:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.517 08:17:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.517 08:17:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.517 08:17:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.517 08:17:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.517 08:17:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.517 08:17:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.517 08:17:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.517 08:17:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.517 08:17:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.517 08:17:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.517 08:17:43 event -- scripts/common.sh@344 -- # case "$op" in 00:04:56.517 08:17:43 event -- scripts/common.sh@345 -- # : 1 00:04:56.517 08:17:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.517 08:17:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.517 08:17:43 event -- scripts/common.sh@365 -- # decimal 1 00:04:56.517 08:17:43 event -- scripts/common.sh@353 -- # local d=1 00:04:56.517 08:17:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.517 08:17:43 event -- scripts/common.sh@355 -- # echo 1 00:04:56.517 08:17:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.517 08:17:43 event -- scripts/common.sh@366 -- # decimal 2 00:04:56.517 08:17:43 event -- scripts/common.sh@353 -- # local d=2 00:04:56.517 08:17:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.517 08:17:43 event -- scripts/common.sh@355 -- # echo 2 00:04:56.517 08:17:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.517 08:17:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.517 08:17:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.517 08:17:43 event -- scripts/common.sh@368 -- # return 0 00:04:56.517 08:17:43 event -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.517 08:17:43 event -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:04:56.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.517 --rc genhtml_branch_coverage=1 00:04:56.517 --rc genhtml_function_coverage=1 00:04:56.517 --rc genhtml_legend=1 00:04:56.517 --rc geninfo_all_blocks=1 00:04:56.517 --rc geninfo_unexecuted_blocks=1 00:04:56.517 00:04:56.517 ' 00:04:56.517 08:17:43 event -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:04:56.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.517 --rc genhtml_branch_coverage=1 00:04:56.517 --rc genhtml_function_coverage=1 00:04:56.517 --rc genhtml_legend=1 00:04:56.517 --rc geninfo_all_blocks=1 00:04:56.517 --rc geninfo_unexecuted_blocks=1 00:04:56.517 00:04:56.517 ' 00:04:56.517 08:17:43 event -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:04:56.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.517 --rc genhtml_branch_coverage=1 00:04:56.517 --rc genhtml_function_coverage=1 00:04:56.517 --rc genhtml_legend=1 00:04:56.517 --rc geninfo_all_blocks=1 00:04:56.517 --rc geninfo_unexecuted_blocks=1 00:04:56.517 00:04:56.517 ' 00:04:56.517 08:17:43 event -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:04:56.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.517 --rc genhtml_branch_coverage=1 00:04:56.517 --rc genhtml_function_coverage=1 00:04:56.517 --rc genhtml_legend=1 00:04:56.517 --rc geninfo_all_blocks=1 00:04:56.517 --rc geninfo_unexecuted_blocks=1 00:04:56.517 00:04:56.517 ' 00:04:56.517 08:17:43 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:56.517 08:17:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:56.517 08:17:43 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:56.517 08:17:43 event -- common/autotest_common.sh@1108 -- # '[' 6 -le 1 ']' 00:04:56.517 08:17:43 event -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:56.517 08:17:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.517 ************************************ 00:04:56.517 START TEST event_perf 00:04:56.517 ************************************ 00:04:56.517 08:17:44 event.event_perf -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:56.517 Running I/O for 1 seconds...[2024-11-20 
08:17:44.020961] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:04:56.517 [2024-11-20 08:17:44.021193] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57733 ] 00:04:56.775 [2024-11-20 08:17:44.168366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:56.775 [2024-11-20 08:17:44.234410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.775 [2024-11-20 08:17:44.234552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.775 [2024-11-20 08:17:44.234672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.775 [2024-11-20 08:17:44.234673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.771 Running I/O for 1 seconds... 00:04:57.771 lcore 0: 200552 00:04:57.771 lcore 1: 200551 00:04:57.771 lcore 2: 200553 00:04:57.771 lcore 3: 200554 00:04:57.771 done. 00:04:57.771 00:04:57.771 real 0m1.287s 00:04:57.771 user 0m4.111s 00:04:57.771 sys 0m0.057s 00:04:57.771 08:17:45 event.event_perf -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:57.771 ************************************ 00:04:57.771 END TEST event_perf 00:04:57.771 ************************************ 00:04:57.771 08:17:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:57.771 08:17:45 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:57.771 08:17:45 event -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']' 00:04:57.771 08:17:45 event -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:57.771 08:17:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.028 ************************************ 00:04:58.028 START TEST event_reactor 00:04:58.028 ************************************ 00:04:58.028 08:17:45 event.event_reactor -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:58.028 [2024-11-20 08:17:45.350692] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:04:58.029 [2024-11-20 08:17:45.350778] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57766 ] 00:04:58.029 [2024-11-20 08:17:45.490123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.029 [2024-11-20 08:17:45.552116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.402 test_start 00:04:59.402 oneshot 00:04:59.402 tick 100 00:04:59.402 tick 100 00:04:59.402 tick 250 00:04:59.402 tick 100 00:04:59.402 tick 100 00:04:59.402 tick 250 00:04:59.402 tick 100 00:04:59.402 tick 500 00:04:59.402 tick 100 00:04:59.402 tick 100 00:04:59.402 tick 250 00:04:59.402 tick 100 00:04:59.402 tick 100 00:04:59.402 test_end 00:04:59.402 00:04:59.402 real 0m1.263s 00:04:59.402 user 0m1.119s 00:04:59.402 sys 0m0.037s 00:04:59.402 08:17:46 event.event_reactor -- common/autotest_common.sh@1133 -- # xtrace_disable 00:04:59.402 ************************************ 00:04:59.402 END TEST event_reactor 00:04:59.402 ************************************ 00:04:59.402 08:17:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:59.402 08:17:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:59.402 08:17:46 event -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']' 00:04:59.402 08:17:46 event -- common/autotest_common.sh@1114 -- # xtrace_disable 00:04:59.402 08:17:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.402 ************************************ 00:04:59.402 START TEST event_reactor_perf 00:04:59.402 ************************************ 00:04:59.402 08:17:46 event.event_reactor_perf -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:59.402 [2024-11-20 08:17:46.667010] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:04:59.402 [2024-11-20 08:17:46.667096] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57807 ] 00:04:59.402 [2024-11-20 08:17:46.819680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.402 [2024-11-20 08:17:46.890023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.777 test_start 00:05:00.777 test_end 00:05:00.777 Performance: 367872 events per second 00:05:00.777 00:05:00.777 real 0m1.292s 00:05:00.777 user 0m1.144s 00:05:00.777 sys 0m0.041s 00:05:00.777 ************************************ 00:05:00.777 END TEST event_reactor_perf 00:05:00.777 ************************************ 00:05:00.777 08:17:47 event.event_reactor_perf -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:00.777 08:17:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.777 08:17:47 event -- event/event.sh@49 -- # uname -s 00:05:00.777 08:17:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:00.777 08:17:47 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:00.777 08:17:47 event -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:00.777 08:17:47 event -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:00.777 08:17:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.777 ************************************ 00:05:00.777 START TEST event_scheduler 00:05:00.777 ************************************ 00:05:00.777 08:17:47 event.event_scheduler -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:00.777 * Looking for test storage... 
00:05:00.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:00.777 08:17:48 event.event_scheduler -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:05:00.777 08:17:48 event.event_scheduler -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:05:00.777 08:17:48 event.event_scheduler -- common/autotest_common.sh@1638 -- # lcov --version 00:05:00.777 08:17:48 event.event_scheduler -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.777 08:17:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:00.778 08:17:48 event.event_scheduler -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.778 08:17:48 event.event_scheduler -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:05:00.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.778 --rc genhtml_branch_coverage=1 00:05:00.778 --rc genhtml_function_coverage=1 00:05:00.778 --rc genhtml_legend=1 00:05:00.778 --rc geninfo_all_blocks=1 00:05:00.778 --rc geninfo_unexecuted_blocks=1 00:05:00.778 00:05:00.778 ' 00:05:00.778 08:17:48 event.event_scheduler -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:05:00.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.778 --rc genhtml_branch_coverage=1 00:05:00.778 --rc genhtml_function_coverage=1 00:05:00.778 --rc genhtml_legend=1 00:05:00.778 --rc geninfo_all_blocks=1 00:05:00.778 --rc geninfo_unexecuted_blocks=1 00:05:00.778 00:05:00.778 ' 00:05:00.778 08:17:48 event.event_scheduler -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:05:00.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.778 --rc genhtml_branch_coverage=1 00:05:00.778 --rc genhtml_function_coverage=1 00:05:00.778 --rc genhtml_legend=1 00:05:00.778 --rc geninfo_all_blocks=1 00:05:00.778 --rc geninfo_unexecuted_blocks=1 00:05:00.778 00:05:00.778 ' 00:05:00.778 08:17:48 event.event_scheduler -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:05:00.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.778 --rc genhtml_branch_coverage=1 00:05:00.778 --rc genhtml_function_coverage=1 00:05:00.778 --rc genhtml_legend=1 00:05:00.778 --rc geninfo_all_blocks=1 00:05:00.778 --rc geninfo_unexecuted_blocks=1 00:05:00.778 00:05:00.778 ' 00:05:00.778 08:17:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:00.778 08:17:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57877 00:05:00.778 08:17:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:00.778 08:17:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.778 08:17:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57877 00:05:00.778 08:17:48 
event.event_scheduler -- common/autotest_common.sh@838 -- # '[' -z 57877 ']' 00:05:00.778 08:17:48 event.event_scheduler -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.778 08:17:48 event.event_scheduler -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:00.778 08:17:48 event.event_scheduler -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.778 08:17:48 event.event_scheduler -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:00.778 08:17:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.778 [2024-11-20 08:17:48.271750] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:05:00.778 [2024-11-20 08:17:48.271888] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57877 ] 00:05:01.035 [2024-11-20 08:17:48.422634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.035 [2024-11-20 08:17:48.492224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.035 [2024-11-20 08:17:48.492349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.035 [2024-11-20 08:17:48.492418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.035 [2024-11-20 08:17:48.492421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.035 08:17:48 event.event_scheduler -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:01.035 08:17:48 event.event_scheduler -- common/autotest_common.sh@871 -- # return 0 00:05:01.036 08:17:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:01.036 08:17:48 event.event_scheduler -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.036 08:17:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.036 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:01.036 POWER: Cannot set governor of lcore 0 to userspace 00:05:01.036 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:01.036 POWER: Cannot set governor of lcore 0 to performance 00:05:01.036 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:01.036 POWER: Cannot set governor of lcore 0 to userspace 00:05:01.036 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:01.036 POWER: Cannot set governor of lcore 0 to userspace 00:05:01.036 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:01.036 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:01.036 POWER: Unable to set Power Management Environment for lcore 0 00:05:01.036 [2024-11-20 08:17:48.542203] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:01.036 [2024-11-20 08:17:48.542376] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:01.036 [2024-11-20 08:17:48.542417] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:01.036 [2024-11-20 08:17:48.542519] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:01.036 [2024-11-20 08:17:48.542558] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:01.036 [2024-11-20 08:17:48.542679] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:01.036 08:17:48 event.event_scheduler -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.036 08:17:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:01.036 08:17:48 event.event_scheduler -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.036 08:17:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.294 [2024-11-20 08:17:48.608931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:01.294 [2024-11-20 08:17:48.645109] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:01.294 08:17:48 event.event_scheduler -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.294 08:17:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:01.294 08:17:48 event.event_scheduler -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:01.294 08:17:48 event.event_scheduler -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:01.294 08:17:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.294 ************************************ 00:05:01.294 START TEST scheduler_create_thread 00:05:01.294 ************************************ 00:05:01.294 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1132 -- # scheduler_create_thread 00:05:01.294 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:01.294 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.294 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.294 2 00:05:01.294 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.294 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.295 3 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.295 4 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.295 5 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.295 6 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.295 7 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.295 8 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.295 9 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.295 10 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.295 08:17:48 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.295 08:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.862 08:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:01.862 08:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:01.862 08:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:01.862 08:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:01.862 08:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.238 08:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:03.238 00:05:03.238 real 0m1.753s 00:05:03.238 user 0m0.020s 00:05:03.238 sys 0m0.005s 00:05:03.238 08:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:03.238 08:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.238 ************************************ 00:05:03.238 END TEST scheduler_create_thread 00:05:03.238 ************************************ 00:05:03.238 08:17:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:03.238 08:17:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57877 00:05:03.238 08:17:50 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' -z 57877 ']' 00:05:03.238 08:17:50 event.event_scheduler -- common/autotest_common.sh@961 -- # kill -0 57877 00:05:03.238 08:17:50 event.event_scheduler -- common/autotest_common.sh@962 -- # uname 00:05:03.238 08:17:50 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:03.238 08:17:50 event.event_scheduler -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 57877 00:05:03.238 killing process with pid 57877 00:05:03.238 08:17:50 event.event_scheduler -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:05:03.238 08:17:50 event.event_scheduler -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:05:03.238 08:17:50 event.event_scheduler -- common/autotest_common.sh@975 -- # echo 'killing process with pid 
57877' 00:05:03.238 08:17:50 event.event_scheduler -- common/autotest_common.sh@976 -- # kill 57877 00:05:03.238 08:17:50 event.event_scheduler -- common/autotest_common.sh@981 -- # wait 57877 00:05:03.495 [2024-11-20 08:17:50.888212] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:03.754 00:05:03.754 real 0m3.083s 00:05:03.754 user 0m3.808s 00:05:03.754 sys 0m0.370s 00:05:03.754 08:17:51 event.event_scheduler -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:03.754 ************************************ 00:05:03.754 END TEST event_scheduler 00:05:03.754 ************************************ 00:05:03.754 08:17:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.754 08:17:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:03.754 08:17:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:03.754 08:17:51 event -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:03.754 08:17:51 event -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:03.754 08:17:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.754 ************************************ 00:05:03.754 START TEST app_repeat 00:05:03.754 ************************************ 00:05:03.754 08:17:51 event.app_repeat -- common/autotest_common.sh@1132 -- # app_repeat_test 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:03.754 Process app_repeat pid: 57958 00:05:03.754 spdk_app_start Round 0 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57958 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57958' 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:03.754 08:17:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57958 /var/tmp/spdk-nbd.sock 00:05:03.754 08:17:51 event.app_repeat -- common/autotest_common.sh@838 -- # '[' -z 57958 ']' 00:05:03.754 08:17:51 event.app_repeat -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.754 08:17:51 event.app_repeat -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:03.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.754 08:17:51 event.app_repeat -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
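(A minimal sketch of the wait-for-socket idea the waitforlisten calls in this trace rely on: poll the app's RPC socket until it answers. The real helper in common/autotest_common.sh tracks max_retries=100 and handles cleanup on timeout; the probe RPC used here, rpc_get_methods, is only an illustrative choice, not necessarily what the script does.)

    waitforsocket() {
        local sock=$1 retries=${2:-100}
        # Keep probing the UNIX-domain RPC socket until the target answers.
        for ((i = 0; i < retries; i++)); do
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1   # caller decides whether to kill the stuck process
    }
    waitforsocket /var/tmp/spdk-nbd.sock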
00:05:03.754 08:17:51 event.app_repeat -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:03.754 08:17:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.754 [2024-11-20 08:17:51.171253] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:05:03.754 [2024-11-20 08:17:51.172381] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57958 ] 00:05:04.013 [2024-11-20 08:17:51.320261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.013 [2024-11-20 08:17:51.382717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.013 [2024-11-20 08:17:51.382740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.014 [2024-11-20 08:17:51.442634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:04.014 08:17:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:04.014 08:17:51 event.app_repeat -- common/autotest_common.sh@871 -- # return 0 00:05:04.014 08:17:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.303 Malloc0 00:05:04.303 08:17:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.594 Malloc1 00:05:04.594 08:17:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.594 08:17:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.852 /dev/nbd0 00:05:04.852 08:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.852 08:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.852 08:17:52 event.app_repeat -- common/autotest_common.sh@875 -- # local 
nbd_name=nbd0 00:05:04.852 08:17:52 event.app_repeat -- common/autotest_common.sh@876 -- # local i 00:05:04.852 08:17:52 event.app_repeat -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:05:04.852 08:17:52 event.app_repeat -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:05:04.852 08:17:52 event.app_repeat -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:05:04.852 08:17:52 event.app_repeat -- common/autotest_common.sh@880 -- # break 00:05:04.852 08:17:52 event.app_repeat -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:05:04.852 08:17:52 event.app_repeat -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:05:04.852 08:17:52 event.app_repeat -- common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.852 1+0 records in 00:05:04.852 1+0 records out 00:05:04.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206087 s, 19.9 MB/s 00:05:04.852 08:17:52 event.app_repeat -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.110 08:17:52 event.app_repeat -- common/autotest_common.sh@893 -- # size=4096 00:05:05.110 08:17:52 event.app_repeat -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.110 08:17:52 event.app_repeat -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:05:05.110 08:17:52 event.app_repeat -- common/autotest_common.sh@896 -- # return 0 00:05:05.110 08:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.110 08:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.110 08:17:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.369 /dev/nbd1 00:05:05.369 08:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.369 08:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@875 -- # local nbd_name=nbd1 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@876 -- # local i 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@879 -- # grep -q -w nbd1 /proc/partitions 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@880 -- # break 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@892 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.369 1+0 records in 00:05:05.369 1+0 records out 00:05:05.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314338 s, 13.0 MB/s 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@893 -- # size=4096 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.369 08:17:52 event.app_repeat -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:05:05.369 08:17:52 event.app_repeat -- 
common/autotest_common.sh@896 -- # return 0 00:05:05.369 08:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.369 08:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.369 08:17:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.369 08:17:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.369 08:17:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.628 08:17:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.628 { 00:05:05.628 "nbd_device": "/dev/nbd0", 00:05:05.628 "bdev_name": "Malloc0" 00:05:05.628 }, 00:05:05.628 { 00:05:05.628 "nbd_device": "/dev/nbd1", 00:05:05.628 "bdev_name": "Malloc1" 00:05:05.628 } 00:05:05.628 ]' 00:05:05.628 08:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.628 { 00:05:05.628 "nbd_device": "/dev/nbd0", 00:05:05.628 "bdev_name": "Malloc0" 00:05:05.628 }, 00:05:05.628 { 00:05:05.628 "nbd_device": "/dev/nbd1", 00:05:05.628 "bdev_name": "Malloc1" 00:05:05.628 } 00:05:05.628 ]' 00:05:05.628 08:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.628 08:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.628 /dev/nbd1' 00:05:05.628 08:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.628 /dev/nbd1' 00:05:05.628 08:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.628 08:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.628 08:17:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.628 08:17:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.628 08:17:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.628 08:17:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.628 08:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.629 08:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.629 08:17:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.629 08:17:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.629 08:17:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.629 08:17:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.629 256+0 records in 00:05:05.629 256+0 records out 00:05:05.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106509 s, 98.4 MB/s 00:05:05.629 08:17:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.629 08:17:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.629 256+0 records in 00:05:05.629 256+0 records out 00:05:05.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219399 s, 47.8 MB/s 00:05:05.629 08:17:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.629 08:17:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.887 256+0 records in 00:05:05.887 
256+0 records out 00:05:05.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272505 s, 38.5 MB/s 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.887 08:17:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.146 08:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.146 08:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.146 08:17:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.146 08:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.146 08:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.146 08:17:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.146 08:17:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.146 08:17:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.146 08:17:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.146 08:17:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.405 08:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.405 08:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.405 08:17:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.405 08:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.405 08:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:06.405 08:17:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.405 08:17:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.405 08:17:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.405 08:17:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.405 08:17:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.405 08:17:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.663 08:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.663 08:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.663 08:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.663 08:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.663 08:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.663 08:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.663 08:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.663 08:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.663 08:17:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.663 08:17:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.663 08:17:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.663 08:17:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.663 08:17:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.922 08:17:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.180 [2024-11-20 08:17:54.607825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.180 [2024-11-20 08:17:54.671024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.180 [2024-11-20 08:17:54.671036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.180 [2024-11-20 08:17:54.726932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:07.180 [2024-11-20 08:17:54.727048] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.180 [2024-11-20 08:17:54.727062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.464 spdk_app_start Round 1 00:05:10.464 08:17:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.464 08:17:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:10.464 08:17:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57958 /var/tmp/spdk-nbd.sock 00:05:10.464 08:17:57 event.app_repeat -- common/autotest_common.sh@838 -- # '[' -z 57958 ']' 00:05:10.464 08:17:57 event.app_repeat -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.464 08:17:57 event.app_repeat -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:10.464 08:17:57 event.app_repeat -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
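(Round 0 above reduces to the following write-then-verify cycle over the two NBD devices. This is a condensed paraphrase of the nbd_rpc_data_verify / nbd_dd_data_verify helpers seen in the trace, with sizes taken from the dd lines; $tmp stands in for the longer nbdrandtest path.)

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=test/event/nbdrandtest

    $rpc bdev_malloc_create 64 4096                 # Malloc0
    $rpc bdev_malloc_create 64 4096                 # Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of=$tmp bs=4096 count=256    # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M $tmp $nbd                      # verify what was written
    done
    rm $tmp

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1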
00:05:10.465 08:17:57 event.app_repeat -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:10.465 08:17:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.465 08:17:57 event.app_repeat -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:10.465 08:17:57 event.app_repeat -- common/autotest_common.sh@871 -- # return 0 00:05:10.465 08:17:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.723 Malloc0 00:05:10.723 08:17:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.981 Malloc1 00:05:10.981 08:17:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.981 08:17:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.239 /dev/nbd0 00:05:11.240 08:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.240 08:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@875 -- # local nbd_name=nbd0 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@876 -- # local i 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@880 -- # break 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.240 1+0 records in 00:05:11.240 1+0 records out 
00:05:11.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326816 s, 12.5 MB/s 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@893 -- # size=4096 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:05:11.240 08:17:58 event.app_repeat -- common/autotest_common.sh@896 -- # return 0 00:05:11.240 08:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.240 08:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.240 08:17:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.498 /dev/nbd1 00:05:11.498 08:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.498 08:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@875 -- # local nbd_name=nbd1 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@876 -- # local i 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@879 -- # grep -q -w nbd1 /proc/partitions 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@880 -- # break 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@892 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.498 1+0 records in 00:05:11.498 1+0 records out 00:05:11.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330374 s, 12.4 MB/s 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@893 -- # size=4096 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:05:11.498 08:17:58 event.app_repeat -- common/autotest_common.sh@896 -- # return 0 00:05:11.498 08:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.498 08:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.498 08:17:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.498 08:17:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.498 08:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.757 08:17:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.757 { 00:05:11.757 "nbd_device": "/dev/nbd0", 00:05:11.757 "bdev_name": "Malloc0" 00:05:11.757 }, 00:05:11.757 { 00:05:11.757 "nbd_device": "/dev/nbd1", 00:05:11.757 "bdev_name": "Malloc1" 00:05:11.757 } 
00:05:11.757 ]' 00:05:11.757 08:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.757 { 00:05:11.757 "nbd_device": "/dev/nbd0", 00:05:11.757 "bdev_name": "Malloc0" 00:05:11.757 }, 00:05:11.757 { 00:05:11.757 "nbd_device": "/dev/nbd1", 00:05:11.757 "bdev_name": "Malloc1" 00:05:11.757 } 00:05:11.757 ]' 00:05:11.757 08:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.757 08:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.757 /dev/nbd1' 00:05:11.757 08:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.757 /dev/nbd1' 00:05:11.757 08:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.757 08:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.757 08:17:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.757 08:17:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.757 08:17:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.757 08:17:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.757 08:17:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.015 256+0 records in 00:05:12.015 256+0 records out 00:05:12.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00612371 s, 171 MB/s 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.015 256+0 records in 00:05:12.015 256+0 records out 00:05:12.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257014 s, 40.8 MB/s 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.015 256+0 records in 00:05:12.015 256+0 records out 00:05:12.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236021 s, 44.4 MB/s 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.015 08:17:59 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.015 08:17:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.273 08:17:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.273 08:17:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.273 08:17:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.273 08:17:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.273 08:17:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.273 08:17:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.273 08:17:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.273 08:17:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.273 08:17:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.273 08:17:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.531 08:17:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.531 08:17:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.531 08:17:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.531 08:17:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.531 08:17:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.531 08:17:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.531 08:17:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.531 08:17:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.531 08:17:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.531 08:17:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.531 08:17:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.790 08:18:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.790 08:18:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.790 08:18:00 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:12.790 08:18:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.790 08:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.790 08:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.790 08:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.790 08:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.790 08:18:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.790 08:18:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.790 08:18:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.790 08:18:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.790 08:18:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.049 08:18:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.308 [2024-11-20 08:18:00.755660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.308 [2024-11-20 08:18:00.802193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.308 [2024-11-20 08:18:00.802222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.308 [2024-11-20 08:18:00.859293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.308 [2024-11-20 08:18:00.859408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.308 [2024-11-20 08:18:00.859422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.594 spdk_app_start Round 2 00:05:16.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.594 08:18:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.594 08:18:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:16.594 08:18:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57958 /var/tmp/spdk-nbd.sock 00:05:16.594 08:18:03 event.app_repeat -- common/autotest_common.sh@838 -- # '[' -z 57958 ']' 00:05:16.594 08:18:03 event.app_repeat -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.594 08:18:03 event.app_repeat -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:16.594 08:18:03 event.app_repeat -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
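(Each "spdk_app_start Round N" block in this trace repeats the same cycle. A rough paraphrase of the event.sh loop visible in the xtrace (event.sh@23 through @35), not a verbatim copy:)

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done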
00:05:16.594 08:18:03 event.app_repeat -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:16.594 08:18:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.594 08:18:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:16.594 08:18:03 event.app_repeat -- common/autotest_common.sh@871 -- # return 0 00:05:16.594 08:18:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.594 Malloc0 00:05:16.870 08:18:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.140 Malloc1 00:05:17.140 08:18:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.140 08:18:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.140 08:18:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.140 08:18:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.140 08:18:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.141 08:18:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.141 08:18:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.141 08:18:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.141 08:18:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.141 08:18:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.141 08:18:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.141 08:18:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.141 08:18:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.141 08:18:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.141 08:18:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.141 08:18:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.399 /dev/nbd0 00:05:17.399 08:18:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.399 08:18:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@875 -- # local nbd_name=nbd0 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@876 -- # local i 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@879 -- # grep -q -w nbd0 /proc/partitions 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@880 -- # break 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@892 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.399 1+0 records in 00:05:17.399 1+0 records out 
00:05:17.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247571 s, 16.5 MB/s 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@893 -- # size=4096 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:05:17.399 08:18:04 event.app_repeat -- common/autotest_common.sh@896 -- # return 0 00:05:17.399 08:18:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.399 08:18:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.399 08:18:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.657 /dev/nbd1 00:05:17.657 08:18:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.657 08:18:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@875 -- # local nbd_name=nbd1 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@876 -- # local i 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@878 -- # (( i = 1 )) 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@878 -- # (( i <= 20 )) 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@879 -- # grep -q -w nbd1 /proc/partitions 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@880 -- # break 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@891 -- # (( i = 1 )) 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@891 -- # (( i <= 20 )) 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@892 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.657 1+0 records in 00:05:17.657 1+0 records out 00:05:17.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349617 s, 11.7 MB/s 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@893 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@893 -- # size=4096 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@894 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@895 -- # '[' 4096 '!=' 0 ']' 00:05:17.657 08:18:05 event.app_repeat -- common/autotest_common.sh@896 -- # return 0 00:05:17.657 08:18:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.657 08:18:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.657 08:18:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.657 08:18:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.657 08:18:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.915 08:18:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.915 { 00:05:17.915 "nbd_device": "/dev/nbd0", 00:05:17.915 "bdev_name": "Malloc0" 00:05:17.915 }, 00:05:17.915 { 00:05:17.915 "nbd_device": "/dev/nbd1", 00:05:17.915 "bdev_name": "Malloc1" 00:05:17.915 } 
00:05:17.915 ]' 00:05:17.915 08:18:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.915 { 00:05:17.915 "nbd_device": "/dev/nbd0", 00:05:17.915 "bdev_name": "Malloc0" 00:05:17.915 }, 00:05:17.915 { 00:05:17.915 "nbd_device": "/dev/nbd1", 00:05:17.915 "bdev_name": "Malloc1" 00:05:17.915 } 00:05:17.915 ]' 00:05:17.915 08:18:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.174 /dev/nbd1' 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.174 /dev/nbd1' 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.174 256+0 records in 00:05:18.174 256+0 records out 00:05:18.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00670634 s, 156 MB/s 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.174 256+0 records in 00:05:18.174 256+0 records out 00:05:18.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212424 s, 49.4 MB/s 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.174 256+0 records in 00:05:18.174 256+0 records out 00:05:18.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267624 s, 39.2 MB/s 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.174 08:18:05 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.174 08:18:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.433 08:18:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.433 08:18:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.433 08:18:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.433 08:18:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.434 08:18:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.434 08:18:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.434 08:18:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.434 08:18:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.434 08:18:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.434 08:18:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.692 08:18:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.692 08:18:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.692 08:18:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.692 08:18:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.692 08:18:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.693 08:18:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.693 08:18:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.693 08:18:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.693 08:18:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.693 08:18:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.693 08:18:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.951 08:18:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.951 08:18:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.951 08:18:06 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:18.951 08:18:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.951 08:18:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.951 08:18:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.951 08:18:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.951 08:18:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.951 08:18:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.951 08:18:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.951 08:18:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.951 08:18:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.951 08:18:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.209 08:18:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.467 [2024-11-20 08:18:06.907679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.467 [2024-11-20 08:18:06.953346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.467 [2024-11-20 08:18:06.953358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.467 [2024-11-20 08:18:07.006289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.467 [2024-11-20 08:18:07.006407] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.467 [2024-11-20 08:18:07.006420] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.753 08:18:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57958 /var/tmp/spdk-nbd.sock 00:05:22.753 08:18:09 event.app_repeat -- common/autotest_common.sh@838 -- # '[' -z 57958 ']' 00:05:22.753 08:18:09 event.app_repeat -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.753 08:18:09 event.app_repeat -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:22.753 08:18:09 event.app_repeat -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
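The app_repeat trace above exercises the NBD round-trip: two malloc bdevs are created over the /var/tmp/spdk-nbd.sock RPC socket, exported as /dev/nbd0 and /dev/nbd1, filled from a random seed file, compared back, and torn down. A minimal sketch of the same flow run by hand against an already-listening target follows; every RPC and flag is taken from the trace, while the seed-file path is illustrative.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  $rpc -s $sock bdev_malloc_create 64 4096                    # -> Malloc0
  $rpc -s $sock bdev_malloc_create 64 4096                    # -> Malloc1
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
  $rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device'     # expect /dev/nbd0 and /dev/nbd1
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256    # seed data (path is illustrative)
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest "$nbd"                    # verify the write round-trip
  done
  rm /tmp/nbdrandtest
  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd1

The waitfornbd/waitfornbd_exit helpers visible in the trace bracket each device with up to 20 polls of /proc/partitions (the repeated grep -q -w nbd0 lines) plus a single direct-I/O 4 KiB dd, which is why every start and stop is surrounded by those entries.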
00:05:22.753 08:18:09 event.app_repeat -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:22.753 08:18:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@871 -- # return 0 00:05:22.753 08:18:10 event.app_repeat -- event/event.sh@39 -- # killprocess 57958 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@957 -- # '[' -z 57958 ']' 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@961 -- # kill -0 57958 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@962 -- # uname 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 57958 00:05:22.753 killing process with pid 57958 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@975 -- # echo 'killing process with pid 57958' 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@976 -- # kill 57958 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@981 -- # wait 57958 00:05:22.753 spdk_app_start is called in Round 0. 00:05:22.753 Shutdown signal received, stop current app iteration 00:05:22.753 Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 reinitialization... 00:05:22.753 spdk_app_start is called in Round 1. 00:05:22.753 Shutdown signal received, stop current app iteration 00:05:22.753 Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 reinitialization... 00:05:22.753 spdk_app_start is called in Round 2. 00:05:22.753 Shutdown signal received, stop current app iteration 00:05:22.753 Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 reinitialization... 00:05:22.753 spdk_app_start is called in Round 3. 00:05:22.753 Shutdown signal received, stop current app iteration 00:05:22.753 08:18:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:22.753 08:18:10 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:22.753 00:05:22.753 real 0m19.122s 00:05:22.753 user 0m43.683s 00:05:22.753 sys 0m2.877s 00:05:22.753 ************************************ 00:05:22.753 END TEST app_repeat 00:05:22.753 ************************************ 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:22.753 08:18:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.753 08:18:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:22.753 08:18:10 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:22.753 08:18:10 event -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:22.753 08:18:10 event -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:22.753 08:18:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.753 ************************************ 00:05:22.753 START TEST cpu_locks 00:05:22.753 ************************************ 00:05:22.753 08:18:10 event.cpu_locks -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:23.011 * Looking for test storage... 
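killprocess, used to tear down app_repeat above and every cpu_locks target below, is the common helper whose steps the xtrace shows: pid sanity check, kill -0, OS check, process-name lookup, then kill and wait. A reduced sketch, with the sudo-wrapper branch from the trace omitted:

  killprocess() {
      local pid=$1
      [[ -n $pid ]]                                   # guard against an empty pid
      kill -0 "$pid"                                  # the process must still exist
      [[ $(uname) == Linux ]]
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 in the trace
      [[ $process_name != sudo ]]                     # sudo-wrapped targets take a different path
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                     # reap the child and surface its exit code
  }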
00:05:23.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:23.011 08:18:10 event.cpu_locks -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:05:23.011 08:18:10 event.cpu_locks -- common/autotest_common.sh@1638 -- # lcov --version 00:05:23.011 08:18:10 event.cpu_locks -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:05:23.011 08:18:10 event.cpu_locks -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.011 08:18:10 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:23.011 08:18:10 event.cpu_locks -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.011 08:18:10 event.cpu_locks -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:05:23.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.011 --rc genhtml_branch_coverage=1 00:05:23.011 --rc genhtml_function_coverage=1 00:05:23.011 --rc genhtml_legend=1 00:05:23.011 --rc geninfo_all_blocks=1 00:05:23.011 --rc geninfo_unexecuted_blocks=1 00:05:23.011 00:05:23.011 ' 00:05:23.011 08:18:10 event.cpu_locks -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:05:23.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.011 --rc genhtml_branch_coverage=1 00:05:23.011 --rc genhtml_function_coverage=1 
00:05:23.011 --rc genhtml_legend=1 00:05:23.011 --rc geninfo_all_blocks=1 00:05:23.011 --rc geninfo_unexecuted_blocks=1 00:05:23.011 00:05:23.011 ' 00:05:23.011 08:18:10 event.cpu_locks -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:05:23.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.011 --rc genhtml_branch_coverage=1 00:05:23.011 --rc genhtml_function_coverage=1 00:05:23.011 --rc genhtml_legend=1 00:05:23.011 --rc geninfo_all_blocks=1 00:05:23.011 --rc geninfo_unexecuted_blocks=1 00:05:23.011 00:05:23.011 ' 00:05:23.011 08:18:10 event.cpu_locks -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:05:23.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.011 --rc genhtml_branch_coverage=1 00:05:23.011 --rc genhtml_function_coverage=1 00:05:23.011 --rc genhtml_legend=1 00:05:23.011 --rc geninfo_all_blocks=1 00:05:23.011 --rc geninfo_unexecuted_blocks=1 00:05:23.011 00:05:23.011 ' 00:05:23.011 08:18:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:23.011 08:18:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:23.011 08:18:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:23.011 08:18:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:23.011 08:18:10 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:23.011 08:18:10 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:23.011 08:18:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.011 ************************************ 00:05:23.011 START TEST default_locks 00:05:23.011 ************************************ 00:05:23.011 08:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1132 -- # default_locks 00:05:23.011 08:18:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58408 00:05:23.011 08:18:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58408 00:05:23.011 08:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # '[' -z 58408 ']' 00:05:23.011 08:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.011 08:18:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.011 08:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:23.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.011 08:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.011 08:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:23.011 08:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.270 [2024-11-20 08:18:10.591120] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:23.270 [2024-11-20 08:18:10.591242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58408 ] 00:05:23.270 [2024-11-20 08:18:10.736060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.270 [2024-11-20 08:18:10.803833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.528 [2024-11-20 08:18:10.878177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.528 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:23.528 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@871 -- # return 0 00:05:23.528 08:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58408 00:05:23.528 08:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58408 00:05:23.528 08:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.097 08:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58408 00:05:24.097 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' -z 58408 ']' 00:05:24.097 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@961 -- # kill -0 58408 00:05:24.097 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # uname 00:05:24.097 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:24.097 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58408 00:05:24.097 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:05:24.097 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:05:24.097 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58408' 00:05:24.097 killing process with pid 58408 00:05:24.097 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # kill 58408 00:05:24.097 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@981 -- # wait 58408 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58408 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # local es=0 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@657 -- # valid_exec_arg waitforlisten 58408 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@643 -- # local arg=waitforlisten 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@647 -- # type -t waitforlisten 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@658 -- # waitforlisten 58408 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # '[' -z 58408 ']' 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.664 
08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:24.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.664 ERROR: process (pid: 58408) is no longer running 00:05:24.664 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 853: kill: (58408) - No such process 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@871 -- # return 1 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@658 -- # es=1 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:24.664 08:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:24.665 08:18:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:24.665 00:05:24.665 real 0m1.451s 00:05:24.665 user 0m1.420s 00:05:24.665 sys 0m0.551s 00:05:24.665 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:24.665 08:18:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.665 ************************************ 00:05:24.665 END TEST default_locks 00:05:24.665 ************************************ 00:05:24.665 08:18:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:24.665 08:18:12 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:24.665 08:18:12 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:24.665 08:18:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.665 ************************************ 00:05:24.665 START TEST default_locks_via_rpc 00:05:24.665 ************************************ 00:05:24.665 08:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1132 -- # default_locks_via_rpc 00:05:24.665 08:18:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58448 00:05:24.665 08:18:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58448 00:05:24.665 08:18:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.665 08:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # '[' -z 58448 ']' 00:05:24.665 08:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.665 08:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@843 -- # local 
max_retries=100 00:05:24.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.665 08:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.665 08:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:24.665 08:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.665 [2024-11-20 08:18:12.102650] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:05:24.665 [2024-11-20 08:18:12.102796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58448 ] 00:05:24.923 [2024-11-20 08:18:12.245446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.923 [2024-11-20 08:18:12.309895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.923 [2024-11-20 08:18:12.383450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@871 -- # return 0 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58448 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58448 00:05:25.860 08:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.118 08:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58448 00:05:26.118 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' -z 58448 ']' 00:05:26.118 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@961 -- # kill -0 58448 00:05:26.118 08:18:13 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # uname 00:05:26.118 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:26.118 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58448 00:05:26.118 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:05:26.119 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:05:26.119 killing process with pid 58448 00:05:26.119 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58448' 00:05:26.119 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # kill 58448 00:05:26.119 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@981 -- # wait 58448 00:05:26.377 00:05:26.377 real 0m1.802s 00:05:26.377 user 0m1.964s 00:05:26.377 sys 0m0.519s 00:05:26.377 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:26.377 ************************************ 00:05:26.377 END TEST default_locks_via_rpc 00:05:26.377 ************************************ 00:05:26.377 08:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.377 08:18:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:26.377 08:18:13 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:26.377 08:18:13 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:26.377 08:18:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.377 ************************************ 00:05:26.377 START TEST non_locking_app_on_locked_coremask 00:05:26.377 ************************************ 00:05:26.377 08:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1132 -- # non_locking_app_on_locked_coremask 00:05:26.377 08:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58498 00:05:26.377 08:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58498 /var/tmp/spdk.sock 00:05:26.377 08:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.377 08:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # '[' -z 58498 ']' 00:05:26.377 08:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.377 08:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:26.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.377 08:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
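Both default_locks variants above reduce to one observable: whether the running target holds a file lock matching spdk_cpu_lock for its claimed core. default_locks checks it right after a plain spdk_tgt -m 0x1 launch (pid 58408 above); default_locks_via_rpc toggles the locks at runtime before checking (pid 58448 above). A sketch of the check and the RPC toggle, using the socket and pid from the trace:

  pid=58448
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  $rpc -s $sock framework_disable_cpumask_locks    # release the per-core lock files at runtime
  $rpc -s $sock framework_enable_cpumask_locks     # re-acquire them
  lslocks -p "$pid" | grep -q spdk_cpu_lock        # succeeds only while the core lock is held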
00:05:26.377 08:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:26.378 08:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.635 [2024-11-20 08:18:13.957075] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:05:26.635 [2024-11-20 08:18:13.957186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58498 ] 00:05:26.635 [2024-11-20 08:18:14.106955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.635 [2024-11-20 08:18:14.161945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.893 [2024-11-20 08:18:14.230342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.893 08:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:26.893 08:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@871 -- # return 0 00:05:26.893 08:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58507 00:05:26.893 08:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58507 /var/tmp/spdk2.sock 00:05:26.893 08:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:26.893 08:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # '[' -z 58507 ']' 00:05:26.893 08:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.893 08:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:26.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.893 08:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.893 08:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:26.893 08:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.153 [2024-11-20 08:18:14.505656] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:05:27.153 [2024-11-20 08:18:14.505777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58507 ] 00:05:27.153 [2024-11-20 08:18:14.667806] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
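The non_locking_app_on_locked_coremask run above shows that --disable-cpumask-locks lets a second target share a core that another instance has already locked: the second launch prints "CPU core locks deactivated." and comes up on core 0 anyway. A sketch of the two launches, assuming the same build path as the trace:

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $bin -m 0x1 &                                                # pid 58498: claims the core 0 lock
  pid1=$!
  $bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # pid 58507: no lock, separate RPC socket
  lslocks -p "$pid1" | grep -q spdk_cpu_lock                   # only the first instance holds spdk_cpu_lock

In the test itself, waitforlisten is run against each RPC socket before any RPC is issued, as the "Waiting for process to start up..." entries show.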
00:05:27.153 [2024-11-20 08:18:14.667863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.412 [2024-11-20 08:18:14.794510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.412 [2024-11-20 08:18:14.938556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.979 08:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:27.979 08:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@871 -- # return 0 00:05:27.979 08:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58498 00:05:27.979 08:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.979 08:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58498 00:05:28.915 08:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58498 00:05:28.915 08:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' -z 58498 ']' 00:05:28.915 08:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill -0 58498 00:05:28.915 08:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # uname 00:05:28.915 08:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:28.915 08:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58498 00:05:28.915 08:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:05:28.915 08:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:05:28.915 killing process with pid 58498 00:05:28.915 08:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58498' 00:05:28.915 08:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # kill 58498 00:05:28.915 08:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@981 -- # wait 58498 00:05:29.851 08:18:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58507 00:05:29.851 08:18:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' -z 58507 ']' 00:05:29.851 08:18:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill -0 58507 00:05:29.851 08:18:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # uname 00:05:29.851 08:18:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:29.851 08:18:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58507 00:05:29.851 08:18:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:05:29.851 08:18:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:05:29.851 killing process with pid 58507 00:05:29.851 08:18:17 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58507' 00:05:29.851 08:18:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # kill 58507 00:05:29.851 08:18:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@981 -- # wait 58507 00:05:30.109 00:05:30.109 real 0m3.665s 00:05:30.109 user 0m4.051s 00:05:30.109 sys 0m1.084s 00:05:30.109 08:18:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:30.109 08:18:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.109 ************************************ 00:05:30.109 END TEST non_locking_app_on_locked_coremask 00:05:30.109 ************************************ 00:05:30.109 08:18:17 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:30.109 08:18:17 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:30.109 08:18:17 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:30.109 08:18:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.109 ************************************ 00:05:30.109 START TEST locking_app_on_unlocked_coremask 00:05:30.109 ************************************ 00:05:30.109 08:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1132 -- # locking_app_on_unlocked_coremask 00:05:30.109 08:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58574 00:05:30.109 08:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58574 /var/tmp/spdk.sock 00:05:30.109 08:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # '[' -z 58574 ']' 00:05:30.109 08:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.109 08:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:30.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.109 08:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:30.109 08:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.109 08:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:30.109 08:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.368 [2024-11-20 08:18:17.671593] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:05:30.368 [2024-11-20 08:18:17.671700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58574 ] 00:05:30.368 [2024-11-20 08:18:17.815919] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
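The locking_app_on_unlocked_coremask run traced here inverts the previous case: the first target gives up the core locks at startup, so a second, normally-started target on the same core can still acquire them, and the lslocks assertion is made against the second pid. A sketch using the pids and flags from the trace:

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $bin -m 0x1 --disable-cpumask-locks &           # pid 58574: "CPU core locks deactivated."
  $bin -m 0x1 -r /var/tmp/spdk2.sock &            # pid 58590: acquires the core 0 lock normally
  pid2=$!
  lslocks -p "$pid2" | grep -q spdk_cpu_lock      # the lock is attributed to the second instance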
00:05:30.368 [2024-11-20 08:18:17.815967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.368 [2024-11-20 08:18:17.880494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.627 [2024-11-20 08:18:17.962270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.195 08:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:31.195 08:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@871 -- # return 0 00:05:31.195 08:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58590 00:05:31.195 08:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58590 /var/tmp/spdk2.sock 00:05:31.195 08:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:31.195 08:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # '[' -z 58590 ']' 00:05:31.195 08:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.195 08:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:31.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.195 08:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.195 08:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:31.195 08:18:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.195 [2024-11-20 08:18:18.708974] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:31.195 [2024-11-20 08:18:18.709121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58590 ] 00:05:31.512 [2024-11-20 08:18:18.876134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.512 [2024-11-20 08:18:19.003113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.778 [2024-11-20 08:18:19.140850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.350 08:18:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:32.350 08:18:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@871 -- # return 0 00:05:32.350 08:18:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58590 00:05:32.350 08:18:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58590 00:05:32.350 08:18:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.286 08:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58574 00:05:33.286 08:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' -z 58574 ']' 00:05:33.286 08:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill -0 58574 00:05:33.286 08:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # uname 00:05:33.286 08:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:33.286 08:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58574 00:05:33.286 08:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:05:33.286 08:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:05:33.286 killing process with pid 58574 00:05:33.286 08:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58574' 00:05:33.286 08:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # kill 58574 00:05:33.286 08:18:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@981 -- # wait 58574 00:05:33.854 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58590 00:05:33.854 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' -z 58590 ']' 00:05:33.854 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill -0 58590 00:05:33.854 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # uname 00:05:33.854 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:33.854 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58590 00:05:33.854 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@963 -- # process_name=reactor_0 00:05:33.854 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:05:33.854 killing process with pid 58590 00:05:33.854 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58590' 00:05:33.854 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # kill 58590 00:05:33.854 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@981 -- # wait 58590 00:05:34.420 00:05:34.420 real 0m4.175s 00:05:34.420 user 0m4.643s 00:05:34.420 sys 0m1.166s 00:05:34.420 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:34.420 08:18:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.420 ************************************ 00:05:34.420 END TEST locking_app_on_unlocked_coremask 00:05:34.420 ************************************ 00:05:34.420 08:18:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:34.420 08:18:21 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:34.420 08:18:21 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:34.420 08:18:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.420 ************************************ 00:05:34.420 START TEST locking_app_on_locked_coremask 00:05:34.420 ************************************ 00:05:34.420 08:18:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1132 -- # locking_app_on_locked_coremask 00:05:34.420 08:18:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58657 00:05:34.420 08:18:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58657 /var/tmp/spdk.sock 00:05:34.420 08:18:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.420 08:18:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # '[' -z 58657 ']' 00:05:34.420 08:18:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.420 08:18:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:34.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.421 08:18:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.421 08:18:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:34.421 08:18:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.421 [2024-11-20 08:18:21.898013] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:34.421 [2024-11-20 08:18:21.898568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58657 ] 00:05:34.679 [2024-11-20 08:18:22.042268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.679 [2024-11-20 08:18:22.086607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.679 [2024-11-20 08:18:22.154452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@871 -- # return 0 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58671 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58671 /var/tmp/spdk2.sock 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # local es=0 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@657 -- # valid_exec_arg waitforlisten 58671 /var/tmp/spdk2.sock 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@643 -- # local arg=waitforlisten 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@647 -- # type -t waitforlisten 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@658 -- # waitforlisten 58671 /var/tmp/spdk2.sock 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # '[' -z 58671 ']' 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:34.937 08:18:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.937 [2024-11-20 08:18:22.422948] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:34.937 [2024-11-20 08:18:22.423051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58671 ] 00:05:35.196 [2024-11-20 08:18:22.585470] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58657 has claimed it. 00:05:35.196 [2024-11-20 08:18:22.585561] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:35.763 ERROR: process (pid: 58671) is no longer running 00:05:35.763 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 853: kill: (58671) - No such process 00:05:35.763 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:35.763 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@871 -- # return 1 00:05:35.763 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@658 -- # es=1 00:05:35.763 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:05:35.763 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:05:35.763 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:05:35.763 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58657 00:05:35.763 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58657 00:05:35.763 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.331 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58657 00:05:36.331 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' -z 58657 ']' 00:05:36.331 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill -0 58657 00:05:36.331 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # uname 00:05:36.331 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:36.331 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58657 00:05:36.331 killing process with pid 58657 00:05:36.331 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:05:36.331 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:05:36.331 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58657' 00:05:36.331 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # kill 58657 00:05:36.331 08:18:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@981 -- # wait 58657 00:05:36.590 00:05:36.590 real 0m2.255s 00:05:36.590 user 0m2.528s 00:05:36.590 sys 0m0.628s 00:05:36.590 08:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:36.590 ************************************ 00:05:36.590 END 
TEST locking_app_on_locked_coremask 00:05:36.590 ************************************ 00:05:36.590 08:18:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.590 08:18:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:36.590 08:18:24 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:36.590 08:18:24 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:36.590 08:18:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.590 ************************************ 00:05:36.590 START TEST locking_overlapped_coremask 00:05:36.590 ************************************ 00:05:36.590 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1132 -- # locking_overlapped_coremask 00:05:36.590 08:18:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58716 00:05:36.590 08:18:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58716 /var/tmp/spdk.sock 00:05:36.590 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # '[' -z 58716 ']' 00:05:36.590 08:18:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:36.590 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.590 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:36.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.590 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.590 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:36.590 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.848 [2024-11-20 08:18:24.216331] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:36.848 [2024-11-20 08:18:24.216449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58716 ] 00:05:36.848 [2024-11-20 08:18:24.369144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.106 [2024-11-20 08:18:24.449075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.106 [2024-11-20 08:18:24.449222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.106 [2024-11-20 08:18:24.449241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.106 [2024-11-20 08:18:24.531999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.364 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:37.364 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@871 -- # return 0 00:05:37.364 08:18:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58727 00:05:37.364 08:18:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58727 /var/tmp/spdk2.sock 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # local es=0 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@657 -- # valid_exec_arg waitforlisten 58727 /var/tmp/spdk2.sock 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@643 -- # local arg=waitforlisten 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@647 -- # type -t waitforlisten 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@658 -- # waitforlisten 58727 /var/tmp/spdk2.sock 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # '[' -z 58727 ']' 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:37.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:37.365 08:18:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.365 [2024-11-20 08:18:24.805426] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:37.365 [2024-11-20 08:18:24.805537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58727 ] 00:05:37.622 [2024-11-20 08:18:24.971496] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58716 has claimed it. 00:05:37.622 [2024-11-20 08:18:24.971566] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:38.188 ERROR: process (pid: 58727) is no longer running 00:05:38.188 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 853: kill: (58727) - No such process 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@871 -- # return 1 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@658 -- # es=1 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58716 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' -z 58716 ']' 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@961 -- # kill -0 58716 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # uname 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58716 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58716' 00:05:38.188 killing process with pid 58716 00:05:38.188 08:18:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # kill 58716 00:05:38.188 08:18:25 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@981 -- # wait 58716 00:05:38.755 00:05:38.755 real 0m1.922s 00:05:38.755 user 0m5.122s 00:05:38.755 sys 0m0.436s 00:05:38.755 08:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:38.755 08:18:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.755 ************************************ 00:05:38.755 END TEST locking_overlapped_coremask 00:05:38.755 ************************************ 00:05:38.755 08:18:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:38.755 08:18:26 event.cpu_locks -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:38.755 08:18:26 event.cpu_locks -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:38.755 08:18:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.755 ************************************ 00:05:38.755 START TEST locking_overlapped_coremask_via_rpc 00:05:38.755 ************************************ 00:05:38.755 08:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1132 -- # locking_overlapped_coremask_via_rpc 00:05:38.755 08:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58772 00:05:38.755 08:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58772 /var/tmp/spdk.sock 00:05:38.755 08:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # '[' -z 58772 ']' 00:05:38.755 08:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.755 08:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:38.755 08:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:38.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.755 08:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.755 08:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:38.755 08:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.755 [2024-11-20 08:18:26.180326] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:05:38.755 [2024-11-20 08:18:26.180469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58772 ] 00:05:39.014 [2024-11-20 08:18:26.325563] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:39.014 [2024-11-20 08:18:26.325616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.014 [2024-11-20 08:18:26.398759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.014 [2024-11-20 08:18:26.398904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.014 [2024-11-20 08:18:26.398921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.014 [2024-11-20 08:18:26.496680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.948 08:18:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:39.948 08:18:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@871 -- # return 0 00:05:39.948 08:18:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58790 00:05:39.948 08:18:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:39.948 08:18:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58790 /var/tmp/spdk2.sock 00:05:39.948 08:18:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # '[' -z 58790 ']' 00:05:39.948 08:18:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.948 08:18:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:39.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.948 08:18:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.948 08:18:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:39.948 08:18:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.948 [2024-11-20 08:18:27.274214] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:05:39.948 [2024-11-20 08:18:27.274332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58790 ] 00:05:39.948 [2024-11-20 08:18:27.440636] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:39.948 [2024-11-20 08:18:27.440691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.206 [2024-11-20 08:18:27.576678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.206 [2024-11-20 08:18:27.579935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:40.206 [2024-11-20 08:18:27.579935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.206 [2024-11-20 08:18:27.733701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.771 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:40.771 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@871 -- # return 0 00:05:40.771 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # local es=0 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@658 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.772 [2024-11-20 08:18:28.320976] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58772 has claimed it. 
00:05:40.772 request: 00:05:40.772 { 00:05:40.772 "method": "framework_enable_cpumask_locks", 00:05:40.772 "req_id": 1 00:05:40.772 } 00:05:40.772 Got JSON-RPC error response 00:05:40.772 response: 00:05:40.772 { 00:05:40.772 "code": -32603, 00:05:40.772 "message": "Failed to claim CPU core: 2" 00:05:40.772 } 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@658 -- # es=1 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58772 /var/tmp/spdk.sock 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # '[' -z 58772 ']' 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:40.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:40.772 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.338 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:41.338 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@871 -- # return 0 00:05:41.338 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58790 /var/tmp/spdk2.sock 00:05:41.338 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # '[' -z 58790 ']' 00:05:41.338 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.338 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:41.338 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:41.338 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:41.338 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.597 ************************************ 00:05:41.597 END TEST locking_overlapped_coremask_via_rpc 00:05:41.597 ************************************ 00:05:41.597 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:41.597 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@871 -- # return 0 00:05:41.597 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:41.597 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:41.597 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:41.597 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:41.597 00:05:41.597 real 0m2.832s 00:05:41.597 user 0m1.532s 00:05:41.597 sys 0m0.228s 00:05:41.597 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:41.597 08:18:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.597 08:18:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:41.597 08:18:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58772 ]] 00:05:41.597 08:18:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58772 00:05:41.597 08:18:28 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' -z 58772 ']' 00:05:41.597 08:18:28 event.cpu_locks -- common/autotest_common.sh@961 -- # kill -0 58772 00:05:41.597 08:18:28 event.cpu_locks -- common/autotest_common.sh@962 -- # uname 00:05:41.597 08:18:28 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:41.597 08:18:28 event.cpu_locks -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58772 00:05:41.597 killing process with pid 58772 00:05:41.597 08:18:29 event.cpu_locks -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:05:41.597 08:18:29 event.cpu_locks -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:05:41.597 08:18:29 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58772' 00:05:41.597 08:18:29 event.cpu_locks -- common/autotest_common.sh@976 -- # kill 58772 00:05:41.597 08:18:29 event.cpu_locks -- common/autotest_common.sh@981 -- # wait 58772 00:05:41.855 08:18:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58790 ]] 00:05:41.855 08:18:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58790 00:05:41.855 08:18:29 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' -z 58790 ']' 00:05:41.855 08:18:29 event.cpu_locks -- common/autotest_common.sh@961 -- # kill -0 58790 00:05:41.855 08:18:29 event.cpu_locks -- common/autotest_common.sh@962 -- # uname 00:05:41.855 08:18:29 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:41.855 
08:18:29 event.cpu_locks -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 58790 00:05:42.113 killing process with pid 58790 00:05:42.113 08:18:29 event.cpu_locks -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:05:42.113 08:18:29 event.cpu_locks -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:05:42.113 08:18:29 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'killing process with pid 58790' 00:05:42.113 08:18:29 event.cpu_locks -- common/autotest_common.sh@976 -- # kill 58790 00:05:42.113 08:18:29 event.cpu_locks -- common/autotest_common.sh@981 -- # wait 58790 00:05:42.678 08:18:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:42.678 Process with pid 58772 is not found 00:05:42.678 Process with pid 58790 is not found 00:05:42.679 08:18:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:42.679 08:18:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58772 ]] 00:05:42.679 08:18:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58772 00:05:42.679 08:18:29 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' -z 58772 ']' 00:05:42.679 08:18:29 event.cpu_locks -- common/autotest_common.sh@961 -- # kill -0 58772 00:05:42.679 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 961: kill: (58772) - No such process 00:05:42.679 08:18:29 event.cpu_locks -- common/autotest_common.sh@984 -- # echo 'Process with pid 58772 is not found' 00:05:42.679 08:18:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58790 ]] 00:05:42.679 08:18:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58790 00:05:42.679 08:18:29 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' -z 58790 ']' 00:05:42.679 08:18:29 event.cpu_locks -- common/autotest_common.sh@961 -- # kill -0 58790 00:05:42.679 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 961: kill: (58790) - No such process 00:05:42.679 08:18:29 event.cpu_locks -- common/autotest_common.sh@984 -- # echo 'Process with pid 58790 is not found' 00:05:42.679 08:18:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:42.679 00:05:42.679 real 0m19.679s 00:05:42.679 user 0m35.137s 00:05:42.679 sys 0m5.649s 00:05:42.679 08:18:29 event.cpu_locks -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:42.679 ************************************ 00:05:42.679 END TEST cpu_locks 00:05:42.679 ************************************ 00:05:42.679 08:18:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.679 ************************************ 00:05:42.679 END TEST event 00:05:42.679 ************************************ 00:05:42.679 00:05:42.679 real 0m46.245s 00:05:42.679 user 1m29.222s 00:05:42.679 sys 0m9.317s 00:05:42.679 08:18:30 event -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:42.679 08:18:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.679 08:18:30 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:42.679 08:18:30 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:42.679 08:18:30 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:42.679 08:18:30 -- common/autotest_common.sh@10 -- # set +x 00:05:42.679 ************************************ 00:05:42.679 START TEST thread 00:05:42.679 ************************************ 00:05:42.679 08:18:30 thread -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:42.679 * Looking for test storage... 
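The cpu_locks runs above exercise SPDK's per-core advisory locks: a target started on a core mask takes one lock file per claimed core (/var/tmp/spdk_cpu_lock_000 and so on, visible with lslocks), a second target on an overlapping mask aborts with "Cannot create lock on core N, probably process ... has claimed it", and a target started with --disable-cpumask-locks defers the claim until framework_enable_cpumask_locks is called over its RPC socket, which fails with -32603 while another process still holds a core. A minimal by-hand sketch of the same behaviour, assuming the commands are run from an SPDK checkout with the usual target environment (hugepages) already set up:
./build/bin/spdk_tgt -m 0x1 &                                   # claims core 0
lslocks | grep spdk_cpu_lock                                    # shows /var/tmp/spdk_cpu_lock_000
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock              # exits: "Cannot create lock on core 0"
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock --disable-cpumask-locks &
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
                                                                # -> JSON-RPC error -32603 (Failed to claim CPU core)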
00:05:42.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:42.679 08:18:30 thread -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:05:42.679 08:18:30 thread -- common/autotest_common.sh@1638 -- # lcov --version 00:05:42.679 08:18:30 thread -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:05:42.938 08:18:30 thread -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:05:42.938 08:18:30 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.938 08:18:30 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.938 08:18:30 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.938 08:18:30 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.938 08:18:30 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.938 08:18:30 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.938 08:18:30 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.938 08:18:30 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.938 08:18:30 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.938 08:18:30 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.938 08:18:30 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.938 08:18:30 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:42.938 08:18:30 thread -- scripts/common.sh@345 -- # : 1 00:05:42.938 08:18:30 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.938 08:18:30 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.938 08:18:30 thread -- scripts/common.sh@365 -- # decimal 1 00:05:42.938 08:18:30 thread -- scripts/common.sh@353 -- # local d=1 00:05:42.938 08:18:30 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.938 08:18:30 thread -- scripts/common.sh@355 -- # echo 1 00:05:42.938 08:18:30 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.938 08:18:30 thread -- scripts/common.sh@366 -- # decimal 2 00:05:42.938 08:18:30 thread -- scripts/common.sh@353 -- # local d=2 00:05:42.938 08:18:30 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.938 08:18:30 thread -- scripts/common.sh@355 -- # echo 2 00:05:42.938 08:18:30 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.938 08:18:30 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.938 08:18:30 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.938 08:18:30 thread -- scripts/common.sh@368 -- # return 0 00:05:42.938 08:18:30 thread -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.938 08:18:30 thread -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:05:42.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.938 --rc genhtml_branch_coverage=1 00:05:42.938 --rc genhtml_function_coverage=1 00:05:42.938 --rc genhtml_legend=1 00:05:42.938 --rc geninfo_all_blocks=1 00:05:42.938 --rc geninfo_unexecuted_blocks=1 00:05:42.938 00:05:42.938 ' 00:05:42.938 08:18:30 thread -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:05:42.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.938 --rc genhtml_branch_coverage=1 00:05:42.938 --rc genhtml_function_coverage=1 00:05:42.938 --rc genhtml_legend=1 00:05:42.938 --rc geninfo_all_blocks=1 00:05:42.938 --rc geninfo_unexecuted_blocks=1 00:05:42.938 00:05:42.938 ' 00:05:42.938 08:18:30 thread -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:05:42.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:42.938 --rc genhtml_branch_coverage=1 00:05:42.938 --rc genhtml_function_coverage=1 00:05:42.938 --rc genhtml_legend=1 00:05:42.938 --rc geninfo_all_blocks=1 00:05:42.938 --rc geninfo_unexecuted_blocks=1 00:05:42.938 00:05:42.938 ' 00:05:42.938 08:18:30 thread -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:05:42.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.938 --rc genhtml_branch_coverage=1 00:05:42.938 --rc genhtml_function_coverage=1 00:05:42.938 --rc genhtml_legend=1 00:05:42.938 --rc geninfo_all_blocks=1 00:05:42.938 --rc geninfo_unexecuted_blocks=1 00:05:42.938 00:05:42.938 ' 00:05:42.938 08:18:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:42.938 08:18:30 thread -- common/autotest_common.sh@1108 -- # '[' 8 -le 1 ']' 00:05:42.938 08:18:30 thread -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:42.938 08:18:30 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.938 ************************************ 00:05:42.938 START TEST thread_poller_perf 00:05:42.938 ************************************ 00:05:42.938 08:18:30 thread.thread_poller_perf -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:42.938 [2024-11-20 08:18:30.309426] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:05:42.938 [2024-11-20 08:18:30.309669] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58932 ] 00:05:42.938 [2024-11-20 08:18:30.451296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.196 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:43.196 [2024-11-20 08:18:30.508297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.132 [2024-11-20T08:18:31.693Z] ====================================== 00:05:44.132 [2024-11-20T08:18:31.693Z] busy:2212759988 (cyc) 00:05:44.132 [2024-11-20T08:18:31.693Z] total_run_count: 357000 00:05:44.132 [2024-11-20T08:18:31.693Z] tsc_hz: 2200000000 (cyc) 00:05:44.132 [2024-11-20T08:18:31.693Z] ====================================== 00:05:44.132 [2024-11-20T08:18:31.693Z] poller_cost: 6198 (cyc), 2817 (nsec) 00:05:44.132 00:05:44.132 real 0m1.274s 00:05:44.132 user 0m1.129s 00:05:44.132 sys 0m0.039s 00:05:44.132 08:18:31 thread.thread_poller_perf -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:44.132 ************************************ 00:05:44.132 END TEST thread_poller_perf 00:05:44.132 ************************************ 00:05:44.132 08:18:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.132 08:18:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:44.133 08:18:31 thread -- common/autotest_common.sh@1108 -- # '[' 8 -le 1 ']' 00:05:44.133 08:18:31 thread -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:44.133 08:18:31 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.133 ************************************ 00:05:44.133 START TEST thread_poller_perf 00:05:44.133 ************************************ 00:05:44.133 08:18:31 thread.thread_poller_perf -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:44.133 [2024-11-20 08:18:31.637454] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:05:44.133 [2024-11-20 08:18:31.637551] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58968 ] 00:05:44.391 [2024-11-20 08:18:31.785340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.391 [2024-11-20 08:18:31.840752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.391 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:45.333 [2024-11-20T08:18:32.894Z] ====================================== 00:05:45.333 [2024-11-20T08:18:32.894Z] busy:2202141258 (cyc) 00:05:45.333 [2024-11-20T08:18:32.894Z] total_run_count: 4378000 00:05:45.333 [2024-11-20T08:18:32.894Z] tsc_hz: 2200000000 (cyc) 00:05:45.333 [2024-11-20T08:18:32.894Z] ====================================== 00:05:45.333 [2024-11-20T08:18:32.894Z] poller_cost: 503 (cyc), 228 (nsec) 00:05:45.333 00:05:45.333 real 0m1.271s 00:05:45.333 user 0m1.121s 00:05:45.333 sys 0m0.043s 00:05:45.333 08:18:32 thread.thread_poller_perf -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:45.333 08:18:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.333 ************************************ 00:05:45.333 END TEST thread_poller_perf 00:05:45.333 ************************************ 00:05:45.591 08:18:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:45.591 ************************************ 00:05:45.591 END TEST thread 00:05:45.591 ************************************ 00:05:45.591 00:05:45.591 real 0m2.858s 00:05:45.591 user 0m2.381s 00:05:45.591 sys 0m0.261s 00:05:45.591 08:18:32 thread -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:45.591 08:18:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.591 08:18:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:45.591 08:18:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:45.591 08:18:32 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:45.591 08:18:32 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:45.591 08:18:32 -- common/autotest_common.sh@10 -- # set +x 00:05:45.591 ************************************ 00:05:45.591 START TEST app_cmdline 00:05:45.591 ************************************ 00:05:45.591 08:18:32 app_cmdline -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:45.591 * Looking for test storage... 
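The poller_perf summaries above are plain ratios of the counters they print: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts that through tsc_hz. Checking both runs with shell arithmetic (values copied from the output above):
echo $(( 2212759988 / 357000 ))                                      # -l 1 run: ~6198 cyc per poller iteration
awk 'BEGIN { printf "%d ns\n", 6198 / 2200000000 * 1e9 }'            # ~2817 ns at tsc_hz 2.2 GHz
awk 'BEGIN { c = 2202141258 / 4378000; printf "%d cyc, %d ns\n", c, c / 2200000000 * 1e9 }'
                                                                     # -l 0 run: ~503 cyc, ~228 ns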
00:05:45.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:45.591 08:18:33 app_cmdline -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:05:45.591 08:18:33 app_cmdline -- common/autotest_common.sh@1638 -- # lcov --version 00:05:45.591 08:18:33 app_cmdline -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:05:45.850 08:18:33 app_cmdline -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:45.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.850 08:18:33 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:45.850 08:18:33 app_cmdline -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.850 08:18:33 app_cmdline -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:05:45.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.850 --rc genhtml_branch_coverage=1 00:05:45.850 --rc genhtml_function_coverage=1 00:05:45.850 --rc genhtml_legend=1 00:05:45.850 --rc geninfo_all_blocks=1 00:05:45.850 --rc geninfo_unexecuted_blocks=1 00:05:45.850 00:05:45.850 ' 00:05:45.850 08:18:33 app_cmdline -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:05:45.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.850 --rc genhtml_branch_coverage=1 00:05:45.850 --rc genhtml_function_coverage=1 00:05:45.850 --rc genhtml_legend=1 00:05:45.850 --rc geninfo_all_blocks=1 00:05:45.850 --rc geninfo_unexecuted_blocks=1 00:05:45.850 00:05:45.850 ' 00:05:45.850 08:18:33 app_cmdline -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:05:45.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.850 --rc genhtml_branch_coverage=1 00:05:45.850 --rc genhtml_function_coverage=1 00:05:45.850 --rc genhtml_legend=1 00:05:45.850 --rc geninfo_all_blocks=1 00:05:45.850 --rc geninfo_unexecuted_blocks=1 00:05:45.850 00:05:45.850 ' 00:05:45.850 08:18:33 app_cmdline -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:05:45.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.850 --rc genhtml_branch_coverage=1 00:05:45.850 --rc genhtml_function_coverage=1 00:05:45.850 --rc genhtml_legend=1 00:05:45.850 --rc geninfo_all_blocks=1 00:05:45.850 --rc geninfo_unexecuted_blocks=1 00:05:45.850 00:05:45.850 ' 00:05:45.850 08:18:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:45.850 08:18:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59051 00:05:45.850 08:18:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59051 00:05:45.850 08:18:33 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:45.850 08:18:33 app_cmdline -- common/autotest_common.sh@838 -- # '[' -z 59051 ']' 00:05:45.850 08:18:33 app_cmdline -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.850 08:18:33 app_cmdline -- common/autotest_common.sh@843 -- # local max_retries=100 00:05:45.850 08:18:33 app_cmdline -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.850 08:18:33 app_cmdline -- common/autotest_common.sh@847 -- # xtrace_disable 00:05:45.850 08:18:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:45.850 [2024-11-20 08:18:33.232941] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:45.850 [2024-11-20 08:18:33.233219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59051 ] 00:05:45.850 [2024-11-20 08:18:33.382936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.108 [2024-11-20 08:18:33.441263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.108 [2024-11-20 08:18:33.513212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.366 08:18:33 app_cmdline -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:05:46.366 08:18:33 app_cmdline -- common/autotest_common.sh@871 -- # return 0 00:05:46.366 08:18:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:46.624 { 00:05:46.624 "version": "SPDK v25.01-pre git sha1 717acfa62", 00:05:46.624 "fields": { 00:05:46.624 "major": 25, 00:05:46.624 "minor": 1, 00:05:46.624 "patch": 0, 00:05:46.624 "suffix": "-pre", 00:05:46.624 "commit": "717acfa62" 00:05:46.624 } 00:05:46.624 } 00:05:46.624 08:18:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:46.624 08:18:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:46.624 08:18:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:46.624 08:18:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:46.624 08:18:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:46.624 08:18:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:46.624 08:18:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:46.624 08:18:33 app_cmdline -- common/autotest_common.sh@566 -- # xtrace_disable 00:05:46.624 08:18:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:46.624 08:18:33 app_cmdline -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:05:46.624 08:18:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:46.624 08:18:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:46.624 08:18:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.624 08:18:34 app_cmdline -- common/autotest_common.sh@655 -- # local es=0 00:05:46.624 08:18:34 app_cmdline -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.625 08:18:34 app_cmdline -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:46.625 08:18:34 app_cmdline -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:46.625 08:18:34 app_cmdline -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:46.625 08:18:34 app_cmdline -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:46.625 08:18:34 app_cmdline -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:46.625 08:18:34 app_cmdline -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:46.625 08:18:34 app_cmdline -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:46.625 08:18:34 app_cmdline -- common/autotest_common.sh@649 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:46.625 08:18:34 app_cmdline -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.883 request: 00:05:46.883 { 00:05:46.883 "method": "env_dpdk_get_mem_stats", 00:05:46.883 "req_id": 1 00:05:46.883 } 00:05:46.883 Got JSON-RPC error response 00:05:46.883 response: 00:05:46.883 { 00:05:46.883 "code": -32601, 00:05:46.883 "message": "Method not found" 00:05:46.883 } 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@658 -- # es=1 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:05:46.883 08:18:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59051 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@957 -- # '[' -z 59051 ']' 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@961 -- # kill -0 59051 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@962 -- # uname 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 59051 00:05:46.883 killing process with pid 59051 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@975 -- # echo 'killing process with pid 59051' 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@976 -- # kill 59051 00:05:46.883 08:18:34 app_cmdline -- common/autotest_common.sh@981 -- # wait 59051 00:05:47.449 00:05:47.449 real 0m1.737s 00:05:47.449 user 0m2.082s 00:05:47.449 sys 0m0.492s 00:05:47.449 08:18:34 app_cmdline -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:47.449 ************************************ 00:05:47.449 END TEST app_cmdline 00:05:47.449 ************************************ 00:05:47.449 08:18:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:47.449 08:18:34 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:47.449 08:18:34 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:47.449 08:18:34 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:47.449 08:18:34 -- common/autotest_common.sh@10 -- # set +x 00:05:47.449 ************************************ 00:05:47.449 START TEST version 00:05:47.449 ************************************ 00:05:47.449 08:18:34 version -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:47.449 * Looking for test storage... 
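The app_cmdline run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so the target answers only those two methods and rejects everything else, which is why env_dpdk_get_mem_stats comes back as JSON-RPC error -32601 ("Method not found"). A minimal sketch of the same check, assuming an SPDK checkout with a built target:
./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
./scripts/rpc.py spdk_get_version             # allowed: prints the version JSON seen above
./scripts/rpc.py rpc_get_methods              # allowed: lists exactly the two permitted methods
./scripts/rpc.py env_dpdk_get_mem_stats       # rejected: -32601 "Method not found"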
00:05:47.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:47.449 08:18:34 version -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:05:47.449 08:18:34 version -- common/autotest_common.sh@1638 -- # lcov --version 00:05:47.449 08:18:34 version -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:05:47.449 08:18:34 version -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:05:47.449 08:18:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.449 08:18:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.449 08:18:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.449 08:18:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.449 08:18:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.449 08:18:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.449 08:18:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.449 08:18:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.449 08:18:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.449 08:18:34 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.449 08:18:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.449 08:18:34 version -- scripts/common.sh@344 -- # case "$op" in 00:05:47.449 08:18:34 version -- scripts/common.sh@345 -- # : 1 00:05:47.449 08:18:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.449 08:18:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.449 08:18:34 version -- scripts/common.sh@365 -- # decimal 1 00:05:47.449 08:18:34 version -- scripts/common.sh@353 -- # local d=1 00:05:47.449 08:18:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.449 08:18:34 version -- scripts/common.sh@355 -- # echo 1 00:05:47.449 08:18:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.449 08:18:34 version -- scripts/common.sh@366 -- # decimal 2 00:05:47.449 08:18:34 version -- scripts/common.sh@353 -- # local d=2 00:05:47.449 08:18:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.449 08:18:34 version -- scripts/common.sh@355 -- # echo 2 00:05:47.449 08:18:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.449 08:18:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.449 08:18:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.449 08:18:34 version -- scripts/common.sh@368 -- # return 0 00:05:47.449 08:18:34 version -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.449 08:18:34 version -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:05:47.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.449 --rc genhtml_branch_coverage=1 00:05:47.449 --rc genhtml_function_coverage=1 00:05:47.449 --rc genhtml_legend=1 00:05:47.449 --rc geninfo_all_blocks=1 00:05:47.449 --rc geninfo_unexecuted_blocks=1 00:05:47.449 00:05:47.449 ' 00:05:47.449 08:18:34 version -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:05:47.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.449 --rc genhtml_branch_coverage=1 00:05:47.449 --rc genhtml_function_coverage=1 00:05:47.449 --rc genhtml_legend=1 00:05:47.449 --rc geninfo_all_blocks=1 00:05:47.449 --rc geninfo_unexecuted_blocks=1 00:05:47.449 00:05:47.449 ' 00:05:47.449 08:18:34 version -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:05:47.449 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:47.449 --rc genhtml_branch_coverage=1 00:05:47.449 --rc genhtml_function_coverage=1 00:05:47.449 --rc genhtml_legend=1 00:05:47.449 --rc geninfo_all_blocks=1 00:05:47.449 --rc geninfo_unexecuted_blocks=1 00:05:47.449 00:05:47.449 ' 00:05:47.449 08:18:34 version -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:05:47.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.449 --rc genhtml_branch_coverage=1 00:05:47.449 --rc genhtml_function_coverage=1 00:05:47.449 --rc genhtml_legend=1 00:05:47.449 --rc geninfo_all_blocks=1 00:05:47.449 --rc geninfo_unexecuted_blocks=1 00:05:47.449 00:05:47.449 ' 00:05:47.449 08:18:34 version -- app/version.sh@17 -- # get_header_version major 00:05:47.449 08:18:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:47.449 08:18:34 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.449 08:18:34 version -- app/version.sh@14 -- # cut -f2 00:05:47.449 08:18:34 version -- app/version.sh@17 -- # major=25 00:05:47.449 08:18:34 version -- app/version.sh@18 -- # get_header_version minor 00:05:47.449 08:18:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:47.449 08:18:34 version -- app/version.sh@14 -- # cut -f2 00:05:47.449 08:18:34 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.449 08:18:34 version -- app/version.sh@18 -- # minor=1 00:05:47.449 08:18:34 version -- app/version.sh@19 -- # get_header_version patch 00:05:47.449 08:18:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:47.449 08:18:35 version -- app/version.sh@14 -- # cut -f2 00:05:47.449 08:18:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.449 08:18:35 version -- app/version.sh@19 -- # patch=0 00:05:47.708 08:18:35 version -- app/version.sh@20 -- # get_header_version suffix 00:05:47.708 08:18:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:47.708 08:18:35 version -- app/version.sh@14 -- # cut -f2 00:05:47.708 08:18:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:47.708 08:18:35 version -- app/version.sh@20 -- # suffix=-pre 00:05:47.708 08:18:35 version -- app/version.sh@22 -- # version=25.1 00:05:47.708 08:18:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:47.708 08:18:35 version -- app/version.sh@28 -- # version=25.1rc0 00:05:47.708 08:18:35 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:47.708 08:18:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:47.708 08:18:35 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:47.708 08:18:35 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:47.708 00:05:47.708 real 0m0.281s 00:05:47.708 user 0m0.172s 00:05:47.708 sys 0m0.144s 00:05:47.708 ************************************ 00:05:47.708 END TEST version 00:05:47.708 ************************************ 00:05:47.708 08:18:35 version -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:47.708 08:18:35 version -- common/autotest_common.sh@10 -- # set +x 00:05:47.708 08:18:35 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:47.708 08:18:35 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:47.708 08:18:35 -- spdk/autotest.sh@194 -- # uname -s 00:05:47.708 08:18:35 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:47.708 08:18:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:47.708 08:18:35 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:47.708 08:18:35 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:47.708 08:18:35 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:47.708 08:18:35 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:05:47.708 08:18:35 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:47.708 08:18:35 -- common/autotest_common.sh@10 -- # set +x 00:05:47.708 ************************************ 00:05:47.708 START TEST spdk_dd 00:05:47.708 ************************************ 00:05:47.708 08:18:35 spdk_dd -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:47.708 * Looking for test storage... 00:05:47.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:47.708 08:18:35 spdk_dd -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:05:47.708 08:18:35 spdk_dd -- common/autotest_common.sh@1638 -- # lcov --version 00:05:47.708 08:18:35 spdk_dd -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:05:47.967 08:18:35 spdk_dd -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:47.967 08:18:35 spdk_dd -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.967 08:18:35 spdk_dd -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:05:47.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.967 --rc genhtml_branch_coverage=1 00:05:47.967 --rc genhtml_function_coverage=1 00:05:47.967 --rc genhtml_legend=1 00:05:47.967 --rc geninfo_all_blocks=1 00:05:47.967 --rc geninfo_unexecuted_blocks=1 00:05:47.967 00:05:47.967 ' 00:05:47.967 08:18:35 spdk_dd -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:05:47.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.967 --rc genhtml_branch_coverage=1 00:05:47.967 --rc genhtml_function_coverage=1 00:05:47.967 --rc genhtml_legend=1 00:05:47.967 --rc geninfo_all_blocks=1 00:05:47.967 --rc geninfo_unexecuted_blocks=1 00:05:47.967 00:05:47.967 ' 00:05:47.967 08:18:35 spdk_dd -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:05:47.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.967 --rc genhtml_branch_coverage=1 00:05:47.967 --rc genhtml_function_coverage=1 00:05:47.967 --rc genhtml_legend=1 00:05:47.967 --rc geninfo_all_blocks=1 00:05:47.967 --rc geninfo_unexecuted_blocks=1 00:05:47.967 00:05:47.967 ' 00:05:47.967 08:18:35 spdk_dd -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:05:47.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.967 --rc genhtml_branch_coverage=1 00:05:47.967 --rc genhtml_function_coverage=1 00:05:47.967 --rc genhtml_legend=1 00:05:47.967 --rc geninfo_all_blocks=1 00:05:47.967 --rc geninfo_unexecuted_blocks=1 00:05:47.967 00:05:47.967 ' 00:05:47.967 08:18:35 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.967 08:18:35 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.967 08:18:35 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.967 08:18:35 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.967 08:18:35 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.967 08:18:35 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:47.967 08:18:35 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.967 08:18:35 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:48.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:48.225 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.225 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:48.226 08:18:35 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:48.226 08:18:35 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:48.226 08:18:35 spdk_dd -- 
scripts/common.sh@234 -- # local subclass 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@238 -- # progif=02 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:48.226 08:18:35 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:48.226 08:18:35 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:48.226 08:18:35 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:48.226 08:18:35 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:48.226 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.226 08:18:35 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.226 
08:18:35 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 
08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.484 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ 
lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:48.485 * spdk_dd linked to liburing 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:48.485 08:18:35 spdk_dd -- 
common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:48.485 08:18:35 spdk_dd -- 
common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:48.485 08:18:35 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:48.485 08:18:35 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:48.485 08:18:35 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:48.485 08:18:35 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:48.485 08:18:35 spdk_dd -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']' 00:05:48.485 08:18:35 spdk_dd -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:48.485 08:18:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:48.485 ************************************ 00:05:48.485 START TEST spdk_dd_basic_rw 00:05:48.485 ************************************ 00:05:48.485 08:18:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:48.485 * Looking for test storage... 
00:05:48.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:48.485 08:18:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:05:48.485 08:18:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1638 -- # lcov --version 00:05:48.485 08:18:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:05:48.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.485 --rc genhtml_branch_coverage=1 00:05:48.485 --rc genhtml_function_coverage=1 00:05:48.485 --rc genhtml_legend=1 00:05:48.485 --rc geninfo_all_blocks=1 00:05:48.485 --rc geninfo_unexecuted_blocks=1 00:05:48.485 00:05:48.485 ' 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:05:48.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.485 --rc genhtml_branch_coverage=1 00:05:48.485 --rc genhtml_function_coverage=1 00:05:48.485 --rc genhtml_legend=1 00:05:48.485 --rc geninfo_all_blocks=1 00:05:48.485 --rc geninfo_unexecuted_blocks=1 00:05:48.485 00:05:48.485 ' 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:05:48.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.485 --rc genhtml_branch_coverage=1 00:05:48.485 --rc genhtml_function_coverage=1 00:05:48.485 --rc genhtml_legend=1 00:05:48.485 --rc geninfo_all_blocks=1 00:05:48.485 --rc geninfo_unexecuted_blocks=1 00:05:48.485 00:05:48.485 ' 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:05:48.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.485 --rc genhtml_branch_coverage=1 00:05:48.485 --rc genhtml_function_coverage=1 00:05:48.485 --rc genhtml_legend=1 00:05:48.485 --rc geninfo_all_blocks=1 00:05:48.485 --rc geninfo_unexecuted_blocks=1 00:05:48.485 00:05:48.485 ' 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.485 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:48.745 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:48.745 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # 
nvme0=Nvme0 00:05:48.745 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:48.745 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:48.745 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:48.745 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:48.745 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:48.745 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:48.745 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:48.745 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:48.745 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:48.745 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support 
================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 
Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 18 Data Units Written: 3 Host Read Commands: 398 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- 
dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 
for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 18 Data Units Written: 3 Host Read Commands: 398 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: 
Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1108 -- # '[' 8 -le 1 ']' 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.746 ************************************ 00:05:48.746 START TEST dd_bs_lt_native_bs 00:05:48.746 ************************************ 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1132 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 
--bs=2048 --json /dev/fd/61 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # local es=0 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:48.746 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.747 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:48.747 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.747 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:05:48.747 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.747 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:48.747 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:49.005 { 00:05:49.005 "subsystems": [ 00:05:49.005 { 00:05:49.005 "subsystem": "bdev", 00:05:49.005 "config": [ 00:05:49.005 { 00:05:49.005 "params": { 00:05:49.005 "trtype": "pcie", 00:05:49.005 "traddr": "0000:00:10.0", 00:05:49.005 "name": "Nvme0" 00:05:49.005 }, 00:05:49.005 "method": "bdev_nvme_attach_controller" 00:05:49.005 }, 00:05:49.005 { 00:05:49.005 "method": "bdev_wait_for_examine" 00:05:49.005 } 00:05:49.005 ] 00:05:49.005 } 00:05:49.005 ] 00:05:49.005 } 00:05:49.005 [2024-11-20 08:18:36.323071] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
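The trace above shows how the harness derives the device's native block size before the first negative test: the controller identify dump is matched twice with bash regexes, first to find the in-use LBA format (#04) and then to read that format's data size (4096 bytes). dd_bs_lt_native_bs then invokes spdk_dd with --bs=2048, which is smaller than that native size, and wraps the call in NOT so the test only passes if spdk_dd refuses the copy. A minimal stand-alone sketch of the same logic, assuming the identify text is already captured in $id (the fd 61/62 plumbing for the config and the input data is set up by the harness and omitted here):

detect_native_bs() {            # hypothetical helper; condenses dd/common.sh@130-134 traced above
    local id=$1 re lbaf bs
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}          # -> 04 on this controller
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] && bs=${BASH_REMATCH[1]}            # -> 4096
    echo "$bs"
}

# The negative test: 2048 < 4096, so spdk_dd is expected to exit non-zero.
# '!' inverts the status here in place of the NOT wrapper used by the test framework.
! /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61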
00:05:49.005 [2024-11-20 08:18:36.323171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59413 ] 00:05:49.005 [2024-11-20 08:18:36.476173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.005 [2024-11-20 08:18:36.535160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.263 [2024-11-20 08:18:36.592402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.263 [2024-11-20 08:18:36.702205] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:49.263 [2024-11-20 08:18:36.702275] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:49.521 [2024-11-20 08:18:36.826959] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@658 -- # es=234 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@667 -- # es=106 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # case "$es" in 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # es=1 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:05:49.521 00:05:49.521 real 0m0.625s 00:05:49.521 user 0m0.418s 00:05:49.521 sys 0m0.161s 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1133 -- # xtrace_disable 00:05:49.521 ************************************ 00:05:49.521 END TEST dd_bs_lt_native_bs 00:05:49.521 ************************************ 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1114 -- # xtrace_disable 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.521 ************************************ 00:05:49.521 START TEST dd_rw 00:05:49.521 ************************************ 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1132 -- # basic_rw 4096 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:49.521 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:49.522 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:49.522 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:49.522 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:49.522 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:49.522 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:49.522 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:49.522 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:49.522 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:49.522 08:18:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.087 08:18:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:50.087 08:18:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:50.087 08:18:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.087 08:18:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.344 [2024-11-20 08:18:37.660472] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:05:50.344 [2024-11-20 08:18:37.660551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59449 ] 00:05:50.344 { 00:05:50.344 "subsystems": [ 00:05:50.344 { 00:05:50.344 "subsystem": "bdev", 00:05:50.344 "config": [ 00:05:50.344 { 00:05:50.344 "params": { 00:05:50.344 "trtype": "pcie", 00:05:50.344 "traddr": "0000:00:10.0", 00:05:50.344 "name": "Nvme0" 00:05:50.344 }, 00:05:50.344 "method": "bdev_nvme_attach_controller" 00:05:50.344 }, 00:05:50.344 { 00:05:50.344 "method": "bdev_wait_for_examine" 00:05:50.344 } 00:05:50.344 ] 00:05:50.344 } 00:05:50.344 ] 00:05:50.344 } 00:05:50.344 [2024-11-20 08:18:37.803217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.344 [2024-11-20 08:18:37.854854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.602 [2024-11-20 08:18:37.915079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.602  [2024-11-20T08:18:38.422Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:50.861 00:05:50.861 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:50.861 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:50.861 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.861 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.861 { 00:05:50.861 "subsystems": [ 00:05:50.861 { 00:05:50.861 "subsystem": "bdev", 00:05:50.861 "config": [ 00:05:50.861 { 00:05:50.861 "params": { 00:05:50.861 "trtype": "pcie", 
00:05:50.861 "traddr": "0000:00:10.0", 00:05:50.861 "name": "Nvme0" 00:05:50.861 }, 00:05:50.861 "method": "bdev_nvme_attach_controller" 00:05:50.861 }, 00:05:50.861 { 00:05:50.861 "method": "bdev_wait_for_examine" 00:05:50.861 } 00:05:50.861 ] 00:05:50.861 } 00:05:50.861 ] 00:05:50.861 } 00:05:50.861 [2024-11-20 08:18:38.298176] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:05:50.861 [2024-11-20 08:18:38.298326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59463 ] 00:05:51.119 [2024-11-20 08:18:38.448078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.119 [2024-11-20 08:18:38.507525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.119 [2024-11-20 08:18:38.566973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.377  [2024-11-20T08:18:38.938Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:51.377 00:05:51.377 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:51.377 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:51.377 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:51.377 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:51.377 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:51.377 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:51.377 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:51.377 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:51.377 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:51.377 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.377 08:18:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.635 [2024-11-20 08:18:38.941488] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:51.635 [2024-11-20 08:18:38.941591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59484 ] 00:05:51.635 { 00:05:51.635 "subsystems": [ 00:05:51.635 { 00:05:51.635 "subsystem": "bdev", 00:05:51.635 "config": [ 00:05:51.635 { 00:05:51.635 "params": { 00:05:51.635 "trtype": "pcie", 00:05:51.635 "traddr": "0000:00:10.0", 00:05:51.635 "name": "Nvme0" 00:05:51.635 }, 00:05:51.635 "method": "bdev_nvme_attach_controller" 00:05:51.635 }, 00:05:51.635 { 00:05:51.635 "method": "bdev_wait_for_examine" 00:05:51.635 } 00:05:51.635 ] 00:05:51.635 } 00:05:51.635 ] 00:05:51.635 } 00:05:51.635 [2024-11-20 08:18:39.089624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.635 [2024-11-20 08:18:39.146028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.892 [2024-11-20 08:18:39.199924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.892  [2024-11-20T08:18:39.710Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:52.149 00:05:52.149 08:18:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:52.149 08:18:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:52.149 08:18:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:52.149 08:18:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:52.149 08:18:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:52.149 08:18:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:52.149 08:18:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.715 08:18:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:52.715 08:18:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:52.715 08:18:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.715 08:18:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.715 [2024-11-20 08:18:40.122438] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
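The first full iteration (bs=4096, qd=1) is now complete, and every later pass in this section repeats the same four commands with different --bs, --qd and --count values: write the pre-generated dump file into the bdev, read the same number of blocks back into a second file, diff the two files, then zero the first MiB of the device (clear_nvme) so the next pass starts from a clean state. A condensed sketch of one pass, using process substitution in place of the /dev/fd/62 plumbing seen in the trace and the gen_conf stand-in shown above:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMPS=/home/vagrant/spdk_repo/spdk/test/dd

# Write 61440 bytes (count=15 blocks of 4096) into the bdev, read them back, compare.
"$DD" --if="$DUMPS/dd.dump0" --ob=Nvme0n1 --bs=4096 --qd=1            --json <(gen_conf)
"$DD" --ib=Nvme0n1 --of="$DUMPS/dd.dump1" --bs=4096 --qd=1 --count=15 --json <(gen_conf)
diff -q "$DUMPS/dd.dump0" "$DUMPS/dd.dump1"               # must report no difference

# clear_nvme: overwrite the first MiB of the device with zeroes before the next pass.
"$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)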
00:05:52.715 [2024-11-20 08:18:40.122870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59503 ] 00:05:52.715 { 00:05:52.715 "subsystems": [ 00:05:52.715 { 00:05:52.715 "subsystem": "bdev", 00:05:52.715 "config": [ 00:05:52.715 { 00:05:52.715 "params": { 00:05:52.715 "trtype": "pcie", 00:05:52.715 "traddr": "0000:00:10.0", 00:05:52.715 "name": "Nvme0" 00:05:52.715 }, 00:05:52.715 "method": "bdev_nvme_attach_controller" 00:05:52.715 }, 00:05:52.715 { 00:05:52.715 "method": "bdev_wait_for_examine" 00:05:52.715 } 00:05:52.715 ] 00:05:52.715 } 00:05:52.715 ] 00:05:52.715 } 00:05:52.715 [2024-11-20 08:18:40.270362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.013 [2024-11-20 08:18:40.327231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.013 [2024-11-20 08:18:40.380859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.013  [2024-11-20T08:18:40.852Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:53.291 00:05:53.291 08:18:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:53.291 08:18:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:53.291 08:18:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.291 08:18:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.291 { 00:05:53.291 "subsystems": [ 00:05:53.291 { 00:05:53.291 "subsystem": "bdev", 00:05:53.291 "config": [ 00:05:53.291 { 00:05:53.291 "params": { 00:05:53.292 "trtype": "pcie", 00:05:53.292 "traddr": "0000:00:10.0", 00:05:53.292 "name": "Nvme0" 00:05:53.292 }, 00:05:53.292 "method": "bdev_nvme_attach_controller" 00:05:53.292 }, 00:05:53.292 { 00:05:53.292 "method": "bdev_wait_for_examine" 00:05:53.292 } 00:05:53.292 ] 00:05:53.292 } 00:05:53.292 ] 00:05:53.292 } 00:05:53.292 [2024-11-20 08:18:40.729468] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:53.292 [2024-11-20 08:18:40.729567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59511 ] 00:05:53.549 [2024-11-20 08:18:40.876644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.549 [2024-11-20 08:18:40.922377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.550 [2024-11-20 08:18:40.976413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.550  [2024-11-20T08:18:41.368Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:53.807 00:05:53.807 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:53.807 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:53.807 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:53.807 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:53.807 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:53.807 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:53.807 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:53.807 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:53.807 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:53.807 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.807 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.807 [2024-11-20 08:18:41.335376] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:53.807 [2024-11-20 08:18:41.335470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59532 ] 00:05:53.807 { 00:05:53.807 "subsystems": [ 00:05:53.807 { 00:05:53.807 "subsystem": "bdev", 00:05:53.807 "config": [ 00:05:53.807 { 00:05:53.807 "params": { 00:05:53.807 "trtype": "pcie", 00:05:53.807 "traddr": "0000:00:10.0", 00:05:53.807 "name": "Nvme0" 00:05:53.807 }, 00:05:53.807 "method": "bdev_nvme_attach_controller" 00:05:53.807 }, 00:05:53.807 { 00:05:53.807 "method": "bdev_wait_for_examine" 00:05:53.807 } 00:05:53.807 ] 00:05:53.807 } 00:05:53.807 ] 00:05:53.807 } 00:05:54.065 [2024-11-20 08:18:41.478609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.065 [2024-11-20 08:18:41.544388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.065 [2024-11-20 08:18:41.603462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.323  [2024-11-20T08:18:42.142Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:54.581 00:05:54.581 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:54.581 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:54.581 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:54.581 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:54.581 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:54.581 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:54.581 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:54.581 08:18:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.147 08:18:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:55.147 08:18:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:55.147 08:18:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.147 08:18:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.147 [2024-11-20 08:18:42.469354] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
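The remaining passes follow from two small arrays set up at the start of dd_rw: the block sizes are the native size shifted left by 0, 1 and 2 (4096, 8192 and 16384 bytes), and each block size is exercised at queue depths 1 and 64. The per-block-size counts used in this run are 15, 7 and 3, giving transfer sizes of 15 × 4096 = 61440, 7 × 8192 = 57344 and 3 × 16384 = 49152 bytes, the sizes handed to gen_bytes in the trace. A hypothetical condensation of that driver loop, with do_rw_pass standing in for the write/read-back/diff/clear sequence sketched earlier:

native_bs=4096
qds=(1 64)
bss=()
for s in 0 1 2; do
    bss+=( $((native_bs << s)) )          # 4096 8192 16384
done
counts=(15 7 3)                           # counts observed in this trace, one per block size

for i in 0 1 2; do
    for qd in "${qds[@]}"; do
        do_rw_pass "${bss[i]}" "$qd" "${counts[i]}"
    done
done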
00:05:55.147 [2024-11-20 08:18:42.469652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59551 ] 00:05:55.147 { 00:05:55.147 "subsystems": [ 00:05:55.147 { 00:05:55.147 "subsystem": "bdev", 00:05:55.147 "config": [ 00:05:55.147 { 00:05:55.147 "params": { 00:05:55.147 "trtype": "pcie", 00:05:55.147 "traddr": "0000:00:10.0", 00:05:55.147 "name": "Nvme0" 00:05:55.147 }, 00:05:55.147 "method": "bdev_nvme_attach_controller" 00:05:55.147 }, 00:05:55.147 { 00:05:55.147 "method": "bdev_wait_for_examine" 00:05:55.147 } 00:05:55.147 ] 00:05:55.147 } 00:05:55.147 ] 00:05:55.147 } 00:05:55.147 [2024-11-20 08:18:42.613780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.147 [2024-11-20 08:18:42.670390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.405 [2024-11-20 08:18:42.725987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.405  [2024-11-20T08:18:43.223Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:55.662 00:05:55.662 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:55.662 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:55.662 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.662 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.662 [2024-11-20 08:18:43.079102] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:55.662 [2024-11-20 08:18:43.079397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59570 ] 00:05:55.662 { 00:05:55.662 "subsystems": [ 00:05:55.662 { 00:05:55.662 "subsystem": "bdev", 00:05:55.663 "config": [ 00:05:55.663 { 00:05:55.663 "params": { 00:05:55.663 "trtype": "pcie", 00:05:55.663 "traddr": "0000:00:10.0", 00:05:55.663 "name": "Nvme0" 00:05:55.663 }, 00:05:55.663 "method": "bdev_nvme_attach_controller" 00:05:55.663 }, 00:05:55.663 { 00:05:55.663 "method": "bdev_wait_for_examine" 00:05:55.663 } 00:05:55.663 ] 00:05:55.663 } 00:05:55.663 ] 00:05:55.663 } 00:05:55.920 [2024-11-20 08:18:43.224011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.920 [2024-11-20 08:18:43.287087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.920 [2024-11-20 08:18:43.342394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.920  [2024-11-20T08:18:43.739Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:56.178 00:05:56.178 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:56.178 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:56.178 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:56.178 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:56.178 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:56.178 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:56.178 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:56.178 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:56.178 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:56.178 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:56.178 08:18:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.178 [2024-11-20 08:18:43.698647] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:56.178 [2024-11-20 08:18:43.699065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59580 ] 00:05:56.178 { 00:05:56.178 "subsystems": [ 00:05:56.178 { 00:05:56.178 "subsystem": "bdev", 00:05:56.178 "config": [ 00:05:56.178 { 00:05:56.178 "params": { 00:05:56.178 "trtype": "pcie", 00:05:56.178 "traddr": "0000:00:10.0", 00:05:56.178 "name": "Nvme0" 00:05:56.178 }, 00:05:56.178 "method": "bdev_nvme_attach_controller" 00:05:56.178 }, 00:05:56.178 { 00:05:56.178 "method": "bdev_wait_for_examine" 00:05:56.178 } 00:05:56.178 ] 00:05:56.178 } 00:05:56.179 ] 00:05:56.179 } 00:05:56.436 [2024-11-20 08:18:43.839893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.436 [2024-11-20 08:18:43.907238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.436 [2024-11-20 08:18:43.966148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.694  [2024-11-20T08:18:44.513Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:56.952 00:05:56.952 08:18:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:56.952 08:18:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:56.952 08:18:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:56.952 08:18:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:56.952 08:18:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:56.952 08:18:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:56.952 08:18:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.518 08:18:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:57.518 08:18:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:57.518 08:18:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.518 08:18:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.518 [2024-11-20 08:18:44.873356] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:57.518 [2024-11-20 08:18:44.873605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59605 ] 00:05:57.518 { 00:05:57.518 "subsystems": [ 00:05:57.518 { 00:05:57.518 "subsystem": "bdev", 00:05:57.518 "config": [ 00:05:57.518 { 00:05:57.518 "params": { 00:05:57.518 "trtype": "pcie", 00:05:57.518 "traddr": "0000:00:10.0", 00:05:57.518 "name": "Nvme0" 00:05:57.518 }, 00:05:57.518 "method": "bdev_nvme_attach_controller" 00:05:57.518 }, 00:05:57.518 { 00:05:57.518 "method": "bdev_wait_for_examine" 00:05:57.518 } 00:05:57.518 ] 00:05:57.518 } 00:05:57.518 ] 00:05:57.518 } 00:05:57.518 [2024-11-20 08:18:45.013157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.518 [2024-11-20 08:18:45.061556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.776 [2024-11-20 08:18:45.118571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.776  [2024-11-20T08:18:45.595Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:58.034 00:05:58.034 08:18:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:58.034 08:18:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:58.034 08:18:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:58.034 08:18:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.034 [2024-11-20 08:18:45.472139] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:58.034 [2024-11-20 08:18:45.472255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59618 ] 00:05:58.034 { 00:05:58.034 "subsystems": [ 00:05:58.034 { 00:05:58.034 "subsystem": "bdev", 00:05:58.034 "config": [ 00:05:58.034 { 00:05:58.034 "params": { 00:05:58.034 "trtype": "pcie", 00:05:58.034 "traddr": "0000:00:10.0", 00:05:58.034 "name": "Nvme0" 00:05:58.034 }, 00:05:58.034 "method": "bdev_nvme_attach_controller" 00:05:58.034 }, 00:05:58.034 { 00:05:58.034 "method": "bdev_wait_for_examine" 00:05:58.034 } 00:05:58.034 ] 00:05:58.034 } 00:05:58.034 ] 00:05:58.034 } 00:05:58.293 [2024-11-20 08:18:45.615605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.293 [2024-11-20 08:18:45.685191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.293 [2024-11-20 08:18:45.742883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.551  [2024-11-20T08:18:46.112Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:58.551 00:05:58.551 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.551 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:58.551 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:58.551 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:58.551 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:58.551 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:58.551 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:58.551 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:58.551 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:58.551 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:58.551 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.551 { 00:05:58.551 "subsystems": [ 00:05:58.551 { 00:05:58.551 "subsystem": "bdev", 00:05:58.551 "config": [ 00:05:58.551 { 00:05:58.551 "params": { 00:05:58.551 "trtype": "pcie", 00:05:58.551 "traddr": "0000:00:10.0", 00:05:58.551 "name": "Nvme0" 00:05:58.551 }, 00:05:58.551 "method": "bdev_nvme_attach_controller" 00:05:58.551 }, 00:05:58.551 { 00:05:58.551 "method": "bdev_wait_for_examine" 00:05:58.551 } 00:05:58.551 ] 00:05:58.551 } 00:05:58.551 ] 00:05:58.551 } 00:05:58.551 [2024-11-20 08:18:46.101517] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:58.551 [2024-11-20 08:18:46.102006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59639 ] 00:05:58.810 [2024-11-20 08:18:46.247996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.810 [2024-11-20 08:18:46.308560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.810 [2024-11-20 08:18:46.363934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.068  [2024-11-20T08:18:46.889Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:59.328 00:05:59.328 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:59.328 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:59.328 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:59.328 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:59.328 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:59.328 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:59.328 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:59.328 08:18:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.586 08:18:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:59.586 08:18:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:59.586 08:18:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:59.586 08:18:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.844 [2024-11-20 08:18:47.193008] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:05:59.844 [2024-11-20 08:18:47.193268] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59658 ] 00:05:59.844 { 00:05:59.844 "subsystems": [ 00:05:59.844 { 00:05:59.844 "subsystem": "bdev", 00:05:59.844 "config": [ 00:05:59.844 { 00:05:59.844 "params": { 00:05:59.844 "trtype": "pcie", 00:05:59.844 "traddr": "0000:00:10.0", 00:05:59.844 "name": "Nvme0" 00:05:59.844 }, 00:05:59.844 "method": "bdev_nvme_attach_controller" 00:05:59.844 }, 00:05:59.844 { 00:05:59.844 "method": "bdev_wait_for_examine" 00:05:59.844 } 00:05:59.844 ] 00:05:59.844 } 00:05:59.844 ] 00:05:59.844 } 00:05:59.844 [2024-11-20 08:18:47.341319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.844 [2024-11-20 08:18:47.404382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.113 [2024-11-20 08:18:47.459483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.113  [2024-11-20T08:18:47.946Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:00.385 00:06:00.385 08:18:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:00.385 08:18:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:00.385 08:18:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.385 08:18:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.385 { 00:06:00.385 "subsystems": [ 00:06:00.385 { 00:06:00.385 "subsystem": "bdev", 00:06:00.385 "config": [ 00:06:00.385 { 00:06:00.385 "params": { 00:06:00.385 "trtype": "pcie", 00:06:00.385 "traddr": "0000:00:10.0", 00:06:00.385 "name": "Nvme0" 00:06:00.385 }, 00:06:00.385 "method": "bdev_nvme_attach_controller" 00:06:00.385 }, 00:06:00.385 { 00:06:00.385 "method": "bdev_wait_for_examine" 00:06:00.385 } 00:06:00.385 ] 00:06:00.385 } 00:06:00.385 ] 00:06:00.385 } 00:06:00.385 [2024-11-20 08:18:47.825326] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:00.385 [2024-11-20 08:18:47.825421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59666 ] 00:06:00.644 [2024-11-20 08:18:47.977286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.644 [2024-11-20 08:18:48.039464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.644 [2024-11-20 08:18:48.098831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.903  [2024-11-20T08:18:48.464Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:00.903 00:06:00.903 08:18:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.903 08:18:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:00.903 08:18:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:00.903 08:18:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:00.903 08:18:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:00.903 08:18:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:00.903 08:18:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:00.903 08:18:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:00.903 08:18:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:00.903 08:18:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.903 08:18:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.903 [2024-11-20 08:18:48.456147] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:00.903 [2024-11-20 08:18:48.456407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59687 ] 00:06:00.903 { 00:06:00.903 "subsystems": [ 00:06:00.903 { 00:06:00.903 "subsystem": "bdev", 00:06:00.903 "config": [ 00:06:00.903 { 00:06:00.903 "params": { 00:06:00.903 "trtype": "pcie", 00:06:00.903 "traddr": "0000:00:10.0", 00:06:00.903 "name": "Nvme0" 00:06:00.903 }, 00:06:00.903 "method": "bdev_nvme_attach_controller" 00:06:00.903 }, 00:06:00.903 { 00:06:00.903 "method": "bdev_wait_for_examine" 00:06:00.903 } 00:06:00.903 ] 00:06:00.903 } 00:06:00.903 ] 00:06:00.903 } 00:06:01.161 [2024-11-20 08:18:48.602539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.161 [2024-11-20 08:18:48.663298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.161 [2024-11-20 08:18:48.719536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.420  [2024-11-20T08:18:49.239Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:01.678 00:06:01.678 08:18:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:01.678 08:18:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:01.678 08:18:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:01.678 08:18:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:01.678 08:18:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:01.678 08:18:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:01.678 08:18:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.244 08:18:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:02.244 08:18:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:02.244 08:18:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:02.244 08:18:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.244 [2024-11-20 08:18:49.547579] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:02.244 [2024-11-20 08:18:49.547894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59706 ] 00:06:02.244 { 00:06:02.244 "subsystems": [ 00:06:02.244 { 00:06:02.244 "subsystem": "bdev", 00:06:02.244 "config": [ 00:06:02.244 { 00:06:02.244 "params": { 00:06:02.244 "trtype": "pcie", 00:06:02.244 "traddr": "0000:00:10.0", 00:06:02.244 "name": "Nvme0" 00:06:02.244 }, 00:06:02.244 "method": "bdev_nvme_attach_controller" 00:06:02.244 }, 00:06:02.244 { 00:06:02.245 "method": "bdev_wait_for_examine" 00:06:02.245 } 00:06:02.245 ] 00:06:02.245 } 00:06:02.245 ] 00:06:02.245 } 00:06:02.245 [2024-11-20 08:18:49.693304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.245 [2024-11-20 08:18:49.753745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.503 [2024-11-20 08:18:49.809424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.503  [2024-11-20T08:18:50.322Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:02.761 00:06:02.761 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:02.761 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:02.761 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:02.761 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.761 [2024-11-20 08:18:50.173714] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:02.761 [2024-11-20 08:18:50.173872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59725 ] 00:06:02.761 { 00:06:02.761 "subsystems": [ 00:06:02.761 { 00:06:02.761 "subsystem": "bdev", 00:06:02.761 "config": [ 00:06:02.761 { 00:06:02.761 "params": { 00:06:02.761 "trtype": "pcie", 00:06:02.761 "traddr": "0000:00:10.0", 00:06:02.761 "name": "Nvme0" 00:06:02.761 }, 00:06:02.761 "method": "bdev_nvme_attach_controller" 00:06:02.761 }, 00:06:02.761 { 00:06:02.761 "method": "bdev_wait_for_examine" 00:06:02.761 } 00:06:02.761 ] 00:06:02.761 } 00:06:02.761 ] 00:06:02.761 } 00:06:03.020 [2024-11-20 08:18:50.323275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.020 [2024-11-20 08:18:50.376641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.020 [2024-11-20 08:18:50.434934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.020  [2024-11-20T08:18:50.841Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:03.280 00:06:03.280 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.280 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:03.280 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:03.280 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:03.280 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:03.280 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:03.280 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:03.280 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:03.280 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:03.280 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.280 08:18:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.280 { 00:06:03.280 "subsystems": [ 00:06:03.280 { 00:06:03.280 "subsystem": "bdev", 00:06:03.280 "config": [ 00:06:03.280 { 00:06:03.280 "params": { 00:06:03.280 "trtype": "pcie", 00:06:03.280 "traddr": "0000:00:10.0", 00:06:03.280 "name": "Nvme0" 00:06:03.280 }, 00:06:03.280 "method": "bdev_nvme_attach_controller" 00:06:03.280 }, 00:06:03.280 { 00:06:03.280 "method": "bdev_wait_for_examine" 00:06:03.280 } 00:06:03.280 ] 00:06:03.280 } 00:06:03.280 ] 00:06:03.280 } 00:06:03.280 [2024-11-20 08:18:50.820167] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:03.280 [2024-11-20 08:18:50.820504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59735 ] 00:06:03.540 [2024-11-20 08:18:50.966453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.540 [2024-11-20 08:18:51.026581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.540 [2024-11-20 08:18:51.083171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.799  [2024-11-20T08:18:51.619Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:04.058 00:06:04.058 ************************************ 00:06:04.058 END TEST dd_rw 00:06:04.058 ************************************ 00:06:04.058 00:06:04.058 real 0m14.430s 00:06:04.058 user 0m10.451s 00:06:04.058 sys 0m5.580s 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.058 ************************************ 00:06:04.058 START TEST dd_rw_offset 00:06:04.058 ************************************ 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1132 -- # basic_offset 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=h0wrl4bqfuzbw3suygwcxb1graln1fa5wpcuo7lzsmn6beismgk0v3pcx5fnp98ugkvgr2fmz2bd7lsj3f2eiu2zb9qz86svg040nnnpjbmh1mje81rim76b1owecq5sofihde1wqdvqozhj6fwcg2chuy5fnqk4ya79yoxauob8i4n2wrew1zribxsxhzgruo5r0lef4jtkg65bv1doxcbch99vap3uhatdhzyzj5eccm5qnbufjawkdp1rwbhy6uhxumyva6yukzl3ajft9ng5edcc7byizs3wfoxiub0ovi2q3irbuq92xpug0v55iarj03helve6ij8jma9tvk8z1k2w7q8kkmf3l4sashwp0khh84xi0k3xf8cfvt85v0djblhvis6a6xiadf0wsafy8p3ysk3f0x78gb2x0505drfaumi7z4jv2bgdevnv344pluitxkoeheb3oufkajlp9r8hq8h2m5fw0ulocfbnwjeltr3zlo5vl59ermn8xmo8dp7xr2hhmkifg486f8yp687kojfd8ivaur1pr3zh4i9608oas50jsc8fv2ykyk8kunw971uvk99i9x0xk1chl4eu6kddgm01jtiq5321f441a6vxg99ysn0fyj0qzkbq91wowsxftntai41vmivha529ifu03mdvc7fr85gvenai9mgyi1w737mqp28gsnhm25b966fi1sp0oo5r5t2v6lttrzee5fg3t231r1t3vkjxdizc2v5kekxd99q64g2qjl3zndpj6s9fj1xl6iio9gfcr341pf3ca5jpum9d5iul2k13kpvwqocei5d20eukqqwmkb5s6pjm8420prm7lwgqr3jmyr6aoiezhl60oxxnvsb5dycq0mwsnuywvxa4amsh39pa083b32hhqsz27oi0c00e3tovs059x686b5fkurmz3lul7drvwdaba0mp08hx9k7ucaxllzwkbyyu83ikugl2q5ulz9r0jru4k4qju2mzc2qksrlvx20d5bl14mt543wy4j54we1825a10zbhyxstv39ul5p7ex6644d0x7ls5yo25sno2ybeuph1la0wtn6lmh5waye846e6280bvilo7iadnx9cldlc1v0qzocxkxe6xyum74xs7e15yqfuhrqflv8qy4vfphg9sl9cgn7yfqb7jyaoqysqe0fi9qy8plxt227thrnf295584etyqwbi85ozm8xs6bo3aorbap9ynlrg088iegzvnyg6oslfu93h0q5wovd8i4mubqui7v4xcpperlbv39cpdm94fch33f58kqch9zee0d7s2r5tfimua4qufistgupogdiyv3s5wjch9fotqfxfz7xiaeaum5utsbt8b2vakwj0o8c2xjedau62yrclqyqbnkhj8s7xz890mp6drbcbvetcpwa20ubi13folkh5rg1mmiplcvq20phrfpwjira07j87wn6f9kbylo86pnzifny0kae2lsybgvbetih3jtz0wpcx6qnz6uezymn4sp12idulq6dz1fcv7evbob534ur5gi0m0ogo0e3iyv0jq7e5hg68arzj02uzm6kf4872uadz70wwfnrs9lp9m98k6jv2bo3ulyso3yrzh9vvx82z1tlopvltjklm570syeacr308nv1tn4kjdiderozom4ay9y8lpgzo87f1s2v5h8x4bznhjzqmcwb77o13otjrrfo7x5jtqdkh8l4inbkbipeorixyfv9vlvawzw0hq45z8a5d7krut7275jhdihuhc3obae19iz52hzsdo5hd7akgn2ktm4483fixp1kidkgj511u1z8f4spady6m64boeqqz5jlfdsm5h8onxz4ju8ps25135961ilmufkwieegyajof0s79mvm6i98i27ynvqx3o0brw4033043b9acvabn2kq0cwgd5mhthw85oo7z3nzrruwqnxk2txdobn8tuqr11g0whwcosk3h6gjomnuu5w7fe1y3lms1ndb50tp4fi9j9kar48n3yimemgxfycy42rvmt1cpljrpifva7jjl4qs3gpbrdt8vahkecllsuvp18g8q9f5lhjztq6toj5k1rq4kk8iibeyfepvqjw0g11ohogrkg5krse82qxqcag9m0k7i5rhdtg80r9lnwyeqjzatrso5d5sdl8aycv8rqhqhoidmdabmcq2g3oyf0oiyiupddnp91z11yxqij8lg6sb646dsiyfwtth6rzi0lls3iecshwnmjls737w24z8615y69lh43rd18kxk2rfmexiuxdpuhj6b13frjg5mc452jit539opxy84z58wfy1x5ozdulvxmxep2c08hbbyjdtb9k26lfdlt8exlknyptpkbpua9kptlj15g211pjkmqk7brpncjoy97r1ff7lijoy7aaon5qkopcfrnai72ycdj9vmw7o9zhluefstb2on93ghl60461uop2jdqw0n0eq3zxuzdimozie0w0pxsa6zq7cd3kr9sv1jjx854907z6xkdlcpulfhyk37gj3l3mv6wirxfjb9cycewdooy7zbxjkfc1h0qwr5ge7zx7zkdgonlpq3pf733wti465rbhvcq4r0h5m7wqje0178gmn7iw7h5pq7ah0lh808stvqb47ea66k57rk2oofshjcv5zbbiskdenebkhuom8nxc6zhddwpvb4n2ife36cwnig5dc89o6mwh36hc1bzpq6svzgvtkr3q8mbts2xbl368umygbjnoi1rrwfl4r1khozzibvoxclbp2ykh16galrjqlmy4myhmiqweemgp21fc7j1odl3697lyg13qhjrtadclkvuo50eo4slgh7uozz5cdjzv3w8c6wo28c3pztvesndbyftsf808ntl5qdmjpw1an67mn3lpnh7m1hcg6oevcilsy6hsbuvg61x2ewwy6hh0fqxu7hhx1lydrovehsi57xmwv0f7m9x808f81s30dvin697ftlp1vbr363zlex13xruye68bt5xjtv34360rxx21yi6s76kc1eat4tdrwk8ex2ftu9zgrgz33ucolbhx0d2ylfb04dzh5oeu3jjw28r7sxajlsi07v9upk5ah4a34l74titb80uvmb40fzod7yyllcdwuv08s7bjzt2831lbe4arv6jt8f1uxdz62wcf2wymqu7l53r76vf3k9lueobkazkgllvelzn71y2a8ssd2l1zrs71ml75wtc6vqfv71j9pmvropyoxpael8wy0pum70lux29takiq7fo96srqf7ag6d0rz3tk0af52jwqosvnrqp88m0y8xbenyriyv38rozltn303toer616th2q0huwhn4uelc62hy34qv98dcxc4kunp18q2a8n6y6bzmc7t431bqf07mj43wshxsjqtps5dbhetz9nf75wak52vsql2ly6t8g1bdnlygqquk13xo1m9b5lkfcph8v0inn3408jp89uohj8w5lvvvdli47a48yhhgjgp7
qz8d9vkdmodyjqj0ca72q9c1l6k5befvdww1qzunkpszjb70xyxinksrocx5674x4o85jb6i30rhqtagvdghdjd5a192kc7f5thl4y4hk4dlgtshotu15awmlysl2d09ka9ft9f1c0dkn4tnmwj61exx4kcxc4q9s88503y1aax6g1g0xiacph9nvit5k7g6xqkfaa0dxcgjjoc3ny0xsw1v6zun0d24uhfx8t7ytmbugvolih56ht0oj7uma6oqtpoyxx0ee8qnobhrhjvntuajqtqgaldjcwqoitraij8byvk34bj4kdwiz4pr2ehtgrd5jwtm6tpl1zeiefc0j0o2re7dbkie10az4zi752y2845ydbgvx89ufj911jfe216f5n1ixzhhdxa0skrnwujojk44iywctx58llf88he43sx54kdyaoeuqct899g0hq75wm6s3zfucs9yvfdvg2vy8zj5qu15esss6piyylp2ftzoawq8zfz0iocx71xak1l10j0z474v3msr579w3qjcmuw4ijnwf3 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:04.058 08:18:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:04.058 [2024-11-20 08:18:51.536384] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:04.058 [2024-11-20 08:18:51.536689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59771 ] 00:06:04.058 { 00:06:04.058 "subsystems": [ 00:06:04.058 { 00:06:04.058 "subsystem": "bdev", 00:06:04.058 "config": [ 00:06:04.058 { 00:06:04.058 "params": { 00:06:04.058 "trtype": "pcie", 00:06:04.058 "traddr": "0000:00:10.0", 00:06:04.058 "name": "Nvme0" 00:06:04.058 }, 00:06:04.058 "method": "bdev_nvme_attach_controller" 00:06:04.058 }, 00:06:04.058 { 00:06:04.058 "method": "bdev_wait_for_examine" 00:06:04.058 } 00:06:04.058 ] 00:06:04.058 } 00:06:04.058 ] 00:06:04.058 } 00:06:04.317 [2024-11-20 08:18:51.689263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.317 [2024-11-20 08:18:51.741822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.317 [2024-11-20 08:18:51.794293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.576  [2024-11-20T08:18:52.137Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:04.576 00:06:04.576 08:18:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:04.576 08:18:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:04.576 08:18:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:04.576 08:18:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:04.834 { 00:06:04.834 "subsystems": [ 00:06:04.834 { 00:06:04.834 "subsystem": "bdev", 00:06:04.834 "config": [ 00:06:04.834 { 00:06:04.834 "params": { 00:06:04.834 "trtype": "pcie", 00:06:04.834 "traddr": "0000:00:10.0", 00:06:04.834 "name": "Nvme0" 00:06:04.834 }, 00:06:04.834 "method": "bdev_nvme_attach_controller" 00:06:04.834 }, 00:06:04.834 { 00:06:04.834 "method": "bdev_wait_for_examine" 00:06:04.834 } 00:06:04.834 ] 00:06:04.834 } 00:06:04.834 ] 00:06:04.834 } 00:06:04.834 [2024-11-20 08:18:52.145963] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:04.834 [2024-11-20 08:18:52.146059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59785 ] 00:06:04.834 [2024-11-20 08:18:52.292816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.834 [2024-11-20 08:18:52.350054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.093 [2024-11-20 08:18:52.406280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.093  [2024-11-20T08:18:52.914Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:05.353 00:06:05.353 08:18:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:05.353 ************************************ 00:06:05.353 END TEST dd_rw_offset 00:06:05.353 ************************************ 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ h0wrl4bqfuzbw3suygwcxb1graln1fa5wpcuo7lzsmn6beismgk0v3pcx5fnp98ugkvgr2fmz2bd7lsj3f2eiu2zb9qz86svg040nnnpjbmh1mje81rim76b1owecq5sofihde1wqdvqozhj6fwcg2chuy5fnqk4ya79yoxauob8i4n2wrew1zribxsxhzgruo5r0lef4jtkg65bv1doxcbch99vap3uhatdhzyzj5eccm5qnbufjawkdp1rwbhy6uhxumyva6yukzl3ajft9ng5edcc7byizs3wfoxiub0ovi2q3irbuq92xpug0v55iarj03helve6ij8jma9tvk8z1k2w7q8kkmf3l4sashwp0khh84xi0k3xf8cfvt85v0djblhvis6a6xiadf0wsafy8p3ysk3f0x78gb2x0505drfaumi7z4jv2bgdevnv344pluitxkoeheb3oufkajlp9r8hq8h2m5fw0ulocfbnwjeltr3zlo5vl59ermn8xmo8dp7xr2hhmkifg486f8yp687kojfd8ivaur1pr3zh4i9608oas50jsc8fv2ykyk8kunw971uvk99i9x0xk1chl4eu6kddgm01jtiq5321f441a6vxg99ysn0fyj0qzkbq91wowsxftntai41vmivha529ifu03mdvc7fr85gvenai9mgyi1w737mqp28gsnhm25b966fi1sp0oo5r5t2v6lttrzee5fg3t231r1t3vkjxdizc2v5kekxd99q64g2qjl3zndpj6s9fj1xl6iio9gfcr341pf3ca5jpum9d5iul2k13kpvwqocei5d20eukqqwmkb5s6pjm8420prm7lwgqr3jmyr6aoiezhl60oxxnvsb5dycq0mwsnuywvxa4amsh39pa083b32hhqsz27oi0c00e3tovs059x686b5fkurmz3lul7drvwdaba0mp08hx9k7ucaxllzwkbyyu83ikugl2q5ulz9r0jru4k4qju2mzc2qksrlvx20d5bl14mt543wy4j54we1825a10zbhyxstv39ul5p7ex6644d0x7ls5yo25sno2ybeuph1la0wtn6lmh5waye846e6280bvilo7iadnx9cldlc1v0qzocxkxe6xyum74xs7e15yqfuhrqflv8qy4vfphg9sl9cgn7yfqb7jyaoqysqe0fi9qy8plxt227thrnf295584etyqwbi85ozm8xs6bo3aorbap9ynlrg088iegzvnyg6oslfu93h0q5wovd8i4mubqui7v4xcpperlbv39cpdm94fch33f58kqch9zee0d7s2r5tfimua4qufistgupogdiyv3s5wjch9fotqfxfz7xiaeaum5utsbt8b2vakwj0o8c2xjedau62yrclqyqbnkhj8s7xz890mp6drbcbvetcpwa20ubi13folkh5rg1mmiplcvq20phrfpwjira07j87wn6f9kbylo86pnzifny0kae2lsybgvbetih3jtz0wpcx6qnz6uezymn4sp12idulq6dz1fcv7evbob534ur5gi0m0ogo0e3iyv0jq7e5hg68arzj02uzm6kf4872uadz70wwfnrs9lp9m98k6jv2bo3ulyso3yrzh9vvx82z1tlopvltjklm570syeacr308nv1tn4kjdiderozom4ay9y8lpgzo87f1s2v5h8x4bznhjzqmcwb77o13otjrrfo7x5jtqdkh8l4inbkbipeorixyfv9vlvawzw0hq45z8a5d7krut7275jhdihuhc3obae19iz52hzsdo5hd7akgn2ktm4483fixp1kidkgj511u1z8f4spady6m64boeqqz5jlfdsm5h8onxz4ju8ps25135961ilmufkwieegyajof0s79mvm6i98i27ynvqx3o0brw4033043b9acvabn2kq0cwgd5mhthw85oo7z3nzrruwqnxk2txdobn8tuqr11g0whwcosk3h6gjomnuu5w7fe1y3lms1ndb50tp4fi9j9kar48n3yimemgxfycy42rvmt1cpljrpifva7jjl4qs3gpbrdt8vahkecllsuvp18g8q9f5lhjztq6toj5k1rq4kk8iibeyfepvqjw0g11ohogrkg5krse82qxqcag9m0k7i5rhdtg80r9lnwyeqjzatrso5d5sdl8aycv8rqhqhoidmdabmcq2g3oyf0oiyiupddnp91z11yxqij8lg6sb646dsiyfwtth6rzi0lls3iecshwnmjls737w24z8615y69lh43rd18kxk2rfmexiuxdpuhj6b13frjg5mc452jit539opxy84z58wfy1x5ozdulvxmxep2c08hbbyjdtb9k26lfdlt8exlknyptpkbpua9kptlj15g211pjkmqk7brpncjoy97r1ff7lijoy7aaon5qkopcfrnai72ycdj9vmw7o9
zhluefstb2on93ghl60461uop2jdqw0n0eq3zxuzdimozie0w0pxsa6zq7cd3kr9sv1jjx854907z6xkdlcpulfhyk37gj3l3mv6wirxfjb9cycewdooy7zbxjkfc1h0qwr5ge7zx7zkdgonlpq3pf733wti465rbhvcq4r0h5m7wqje0178gmn7iw7h5pq7ah0lh808stvqb47ea66k57rk2oofshjcv5zbbiskdenebkhuom8nxc6zhddwpvb4n2ife36cwnig5dc89o6mwh36hc1bzpq6svzgvtkr3q8mbts2xbl368umygbjnoi1rrwfl4r1khozzibvoxclbp2ykh16galrjqlmy4myhmiqweemgp21fc7j1odl3697lyg13qhjrtadclkvuo50eo4slgh7uozz5cdjzv3w8c6wo28c3pztvesndbyftsf808ntl5qdmjpw1an67mn3lpnh7m1hcg6oevcilsy6hsbuvg61x2ewwy6hh0fqxu7hhx1lydrovehsi57xmwv0f7m9x808f81s30dvin697ftlp1vbr363zlex13xruye68bt5xjtv34360rxx21yi6s76kc1eat4tdrwk8ex2ftu9zgrgz33ucolbhx0d2ylfb04dzh5oeu3jjw28r7sxajlsi07v9upk5ah4a34l74titb80uvmb40fzod7yyllcdwuv08s7bjzt2831lbe4arv6jt8f1uxdz62wcf2wymqu7l53r76vf3k9lueobkazkgllvelzn71y2a8ssd2l1zrs71ml75wtc6vqfv71j9pmvropyoxpael8wy0pum70lux29takiq7fo96srqf7ag6d0rz3tk0af52jwqosvnrqp88m0y8xbenyriyv38rozltn303toer616th2q0huwhn4uelc62hy34qv98dcxc4kunp18q2a8n6y6bzmc7t431bqf07mj43wshxsjqtps5dbhetz9nf75wak52vsql2ly6t8g1bdnlygqquk13xo1m9b5lkfcph8v0inn3408jp89uohj8w5lvvvdli47a48yhhgjgp7qz8d9vkdmodyjqj0ca72q9c1l6k5befvdww1qzunkpszjb70xyxinksrocx5674x4o85jb6i30rhqtagvdghdjd5a192kc7f5thl4y4hk4dlgtshotu15awmlysl2d09ka9ft9f1c0dkn4tnmwj61exx4kcxc4q9s88503y1aax6g1g0xiacph9nvit5k7g6xqkfaa0dxcgjjoc3ny0xsw1v6zun0d24uhfx8t7ytmbugvolih56ht0oj7uma6oqtpoyxx0ee8qnobhrhjvntuajqtqgaldjcwqoitraij8byvk34bj4kdwiz4pr2ehtgrd5jwtm6tpl1zeiefc0j0o2re7dbkie10az4zi752y2845ydbgvx89ufj911jfe216f5n1ixzhhdxa0skrnwujojk44iywctx58llf88he43sx54kdyaoeuqct899g0hq75wm6s3zfucs9yvfdvg2vy8zj5qu15esss6piyylp2ftzoawq8zfz0iocx71xak1l10j0z474v3msr579w3qjcmuw4ijnwf3 == \h\0\w\r\l\4\b\q\f\u\z\b\w\3\s\u\y\g\w\c\x\b\1\g\r\a\l\n\1\f\a\5\w\p\c\u\o\7\l\z\s\m\n\6\b\e\i\s\m\g\k\0\v\3\p\c\x\5\f\n\p\9\8\u\g\k\v\g\r\2\f\m\z\2\b\d\7\l\s\j\3\f\2\e\i\u\2\z\b\9\q\z\8\6\s\v\g\0\4\0\n\n\n\p\j\b\m\h\1\m\j\e\8\1\r\i\m\7\6\b\1\o\w\e\c\q\5\s\o\f\i\h\d\e\1\w\q\d\v\q\o\z\h\j\6\f\w\c\g\2\c\h\u\y\5\f\n\q\k\4\y\a\7\9\y\o\x\a\u\o\b\8\i\4\n\2\w\r\e\w\1\z\r\i\b\x\s\x\h\z\g\r\u\o\5\r\0\l\e\f\4\j\t\k\g\6\5\b\v\1\d\o\x\c\b\c\h\9\9\v\a\p\3\u\h\a\t\d\h\z\y\z\j\5\e\c\c\m\5\q\n\b\u\f\j\a\w\k\d\p\1\r\w\b\h\y\6\u\h\x\u\m\y\v\a\6\y\u\k\z\l\3\a\j\f\t\9\n\g\5\e\d\c\c\7\b\y\i\z\s\3\w\f\o\x\i\u\b\0\o\v\i\2\q\3\i\r\b\u\q\9\2\x\p\u\g\0\v\5\5\i\a\r\j\0\3\h\e\l\v\e\6\i\j\8\j\m\a\9\t\v\k\8\z\1\k\2\w\7\q\8\k\k\m\f\3\l\4\s\a\s\h\w\p\0\k\h\h\8\4\x\i\0\k\3\x\f\8\c\f\v\t\8\5\v\0\d\j\b\l\h\v\i\s\6\a\6\x\i\a\d\f\0\w\s\a\f\y\8\p\3\y\s\k\3\f\0\x\7\8\g\b\2\x\0\5\0\5\d\r\f\a\u\m\i\7\z\4\j\v\2\b\g\d\e\v\n\v\3\4\4\p\l\u\i\t\x\k\o\e\h\e\b\3\o\u\f\k\a\j\l\p\9\r\8\h\q\8\h\2\m\5\f\w\0\u\l\o\c\f\b\n\w\j\e\l\t\r\3\z\l\o\5\v\l\5\9\e\r\m\n\8\x\m\o\8\d\p\7\x\r\2\h\h\m\k\i\f\g\4\8\6\f\8\y\p\6\8\7\k\o\j\f\d\8\i\v\a\u\r\1\p\r\3\z\h\4\i\9\6\0\8\o\a\s\5\0\j\s\c\8\f\v\2\y\k\y\k\8\k\u\n\w\9\7\1\u\v\k\9\9\i\9\x\0\x\k\1\c\h\l\4\e\u\6\k\d\d\g\m\0\1\j\t\i\q\5\3\2\1\f\4\4\1\a\6\v\x\g\9\9\y\s\n\0\f\y\j\0\q\z\k\b\q\9\1\w\o\w\s\x\f\t\n\t\a\i\4\1\v\m\i\v\h\a\5\2\9\i\f\u\0\3\m\d\v\c\7\f\r\8\5\g\v\e\n\a\i\9\m\g\y\i\1\w\7\3\7\m\q\p\2\8\g\s\n\h\m\2\5\b\9\6\6\f\i\1\s\p\0\o\o\5\r\5\t\2\v\6\l\t\t\r\z\e\e\5\f\g\3\t\2\3\1\r\1\t\3\v\k\j\x\d\i\z\c\2\v\5\k\e\k\x\d\9\9\q\6\4\g\2\q\j\l\3\z\n\d\p\j\6\s\9\f\j\1\x\l\6\i\i\o\9\g\f\c\r\3\4\1\p\f\3\c\a\5\j\p\u\m\9\d\5\i\u\l\2\k\1\3\k\p\v\w\q\o\c\e\i\5\d\2\0\e\u\k\q\q\w\m\k\b\5\s\6\p\j\m\8\4\2\0\p\r\m\7\l\w\g\q\r\3\j\m\y\r\6\a\o\i\e\z\h\l\6\0\o\x\x\n\v\s\b\5\d\y\c\q\0\m\w\s\n\u\y\w\v\x\a\4\a\m\s\h\3\9\p\a\0\8\3\b\3\2\h\h\q\s\z\2\7\o\i\0\c\0\0\e\3\t\o\v\s\0\5\9\x\6\8\6\b\5\f\k\u\r\m\z\3\l\u\l\7\d\r\v\w\d\a\b\a\0\m\p
\0\8\h\x\9\k\7\u\c\a\x\l\l\z\w\k\b\y\y\u\8\3\i\k\u\g\l\2\q\5\u\l\z\9\r\0\j\r\u\4\k\4\q\j\u\2\m\z\c\2\q\k\s\r\l\v\x\2\0\d\5\b\l\1\4\m\t\5\4\3\w\y\4\j\5\4\w\e\1\8\2\5\a\1\0\z\b\h\y\x\s\t\v\3\9\u\l\5\p\7\e\x\6\6\4\4\d\0\x\7\l\s\5\y\o\2\5\s\n\o\2\y\b\e\u\p\h\1\l\a\0\w\t\n\6\l\m\h\5\w\a\y\e\8\4\6\e\6\2\8\0\b\v\i\l\o\7\i\a\d\n\x\9\c\l\d\l\c\1\v\0\q\z\o\c\x\k\x\e\6\x\y\u\m\7\4\x\s\7\e\1\5\y\q\f\u\h\r\q\f\l\v\8\q\y\4\v\f\p\h\g\9\s\l\9\c\g\n\7\y\f\q\b\7\j\y\a\o\q\y\s\q\e\0\f\i\9\q\y\8\p\l\x\t\2\2\7\t\h\r\n\f\2\9\5\5\8\4\e\t\y\q\w\b\i\8\5\o\z\m\8\x\s\6\b\o\3\a\o\r\b\a\p\9\y\n\l\r\g\0\8\8\i\e\g\z\v\n\y\g\6\o\s\l\f\u\9\3\h\0\q\5\w\o\v\d\8\i\4\m\u\b\q\u\i\7\v\4\x\c\p\p\e\r\l\b\v\3\9\c\p\d\m\9\4\f\c\h\3\3\f\5\8\k\q\c\h\9\z\e\e\0\d\7\s\2\r\5\t\f\i\m\u\a\4\q\u\f\i\s\t\g\u\p\o\g\d\i\y\v\3\s\5\w\j\c\h\9\f\o\t\q\f\x\f\z\7\x\i\a\e\a\u\m\5\u\t\s\b\t\8\b\2\v\a\k\w\j\0\o\8\c\2\x\j\e\d\a\u\6\2\y\r\c\l\q\y\q\b\n\k\h\j\8\s\7\x\z\8\9\0\m\p\6\d\r\b\c\b\v\e\t\c\p\w\a\2\0\u\b\i\1\3\f\o\l\k\h\5\r\g\1\m\m\i\p\l\c\v\q\2\0\p\h\r\f\p\w\j\i\r\a\0\7\j\8\7\w\n\6\f\9\k\b\y\l\o\8\6\p\n\z\i\f\n\y\0\k\a\e\2\l\s\y\b\g\v\b\e\t\i\h\3\j\t\z\0\w\p\c\x\6\q\n\z\6\u\e\z\y\m\n\4\s\p\1\2\i\d\u\l\q\6\d\z\1\f\c\v\7\e\v\b\o\b\5\3\4\u\r\5\g\i\0\m\0\o\g\o\0\e\3\i\y\v\0\j\q\7\e\5\h\g\6\8\a\r\z\j\0\2\u\z\m\6\k\f\4\8\7\2\u\a\d\z\7\0\w\w\f\n\r\s\9\l\p\9\m\9\8\k\6\j\v\2\b\o\3\u\l\y\s\o\3\y\r\z\h\9\v\v\x\8\2\z\1\t\l\o\p\v\l\t\j\k\l\m\5\7\0\s\y\e\a\c\r\3\0\8\n\v\1\t\n\4\k\j\d\i\d\e\r\o\z\o\m\4\a\y\9\y\8\l\p\g\z\o\8\7\f\1\s\2\v\5\h\8\x\4\b\z\n\h\j\z\q\m\c\w\b\7\7\o\1\3\o\t\j\r\r\f\o\7\x\5\j\t\q\d\k\h\8\l\4\i\n\b\k\b\i\p\e\o\r\i\x\y\f\v\9\v\l\v\a\w\z\w\0\h\q\4\5\z\8\a\5\d\7\k\r\u\t\7\2\7\5\j\h\d\i\h\u\h\c\3\o\b\a\e\1\9\i\z\5\2\h\z\s\d\o\5\h\d\7\a\k\g\n\2\k\t\m\4\4\8\3\f\i\x\p\1\k\i\d\k\g\j\5\1\1\u\1\z\8\f\4\s\p\a\d\y\6\m\6\4\b\o\e\q\q\z\5\j\l\f\d\s\m\5\h\8\o\n\x\z\4\j\u\8\p\s\2\5\1\3\5\9\6\1\i\l\m\u\f\k\w\i\e\e\g\y\a\j\o\f\0\s\7\9\m\v\m\6\i\9\8\i\2\7\y\n\v\q\x\3\o\0\b\r\w\4\0\3\3\0\4\3\b\9\a\c\v\a\b\n\2\k\q\0\c\w\g\d\5\m\h\t\h\w\8\5\o\o\7\z\3\n\z\r\r\u\w\q\n\x\k\2\t\x\d\o\b\n\8\t\u\q\r\1\1\g\0\w\h\w\c\o\s\k\3\h\6\g\j\o\m\n\u\u\5\w\7\f\e\1\y\3\l\m\s\1\n\d\b\5\0\t\p\4\f\i\9\j\9\k\a\r\4\8\n\3\y\i\m\e\m\g\x\f\y\c\y\4\2\r\v\m\t\1\c\p\l\j\r\p\i\f\v\a\7\j\j\l\4\q\s\3\g\p\b\r\d\t\8\v\a\h\k\e\c\l\l\s\u\v\p\1\8\g\8\q\9\f\5\l\h\j\z\t\q\6\t\o\j\5\k\1\r\q\4\k\k\8\i\i\b\e\y\f\e\p\v\q\j\w\0\g\1\1\o\h\o\g\r\k\g\5\k\r\s\e\8\2\q\x\q\c\a\g\9\m\0\k\7\i\5\r\h\d\t\g\8\0\r\9\l\n\w\y\e\q\j\z\a\t\r\s\o\5\d\5\s\d\l\8\a\y\c\v\8\r\q\h\q\h\o\i\d\m\d\a\b\m\c\q\2\g\3\o\y\f\0\o\i\y\i\u\p\d\d\n\p\9\1\z\1\1\y\x\q\i\j\8\l\g\6\s\b\6\4\6\d\s\i\y\f\w\t\t\h\6\r\z\i\0\l\l\s\3\i\e\c\s\h\w\n\m\j\l\s\7\3\7\w\2\4\z\8\6\1\5\y\6\9\l\h\4\3\r\d\1\8\k\x\k\2\r\f\m\e\x\i\u\x\d\p\u\h\j\6\b\1\3\f\r\j\g\5\m\c\4\5\2\j\i\t\5\3\9\o\p\x\y\8\4\z\5\8\w\f\y\1\x\5\o\z\d\u\l\v\x\m\x\e\p\2\c\0\8\h\b\b\y\j\d\t\b\9\k\2\6\l\f\d\l\t\8\e\x\l\k\n\y\p\t\p\k\b\p\u\a\9\k\p\t\l\j\1\5\g\2\1\1\p\j\k\m\q\k\7\b\r\p\n\c\j\o\y\9\7\r\1\f\f\7\l\i\j\o\y\7\a\a\o\n\5\q\k\o\p\c\f\r\n\a\i\7\2\y\c\d\j\9\v\m\w\7\o\9\z\h\l\u\e\f\s\t\b\2\o\n\9\3\g\h\l\6\0\4\6\1\u\o\p\2\j\d\q\w\0\n\0\e\q\3\z\x\u\z\d\i\m\o\z\i\e\0\w\0\p\x\s\a\6\z\q\7\c\d\3\k\r\9\s\v\1\j\j\x\8\5\4\9\0\7\z\6\x\k\d\l\c\p\u\l\f\h\y\k\3\7\g\j\3\l\3\m\v\6\w\i\r\x\f\j\b\9\c\y\c\e\w\d\o\o\y\7\z\b\x\j\k\f\c\1\h\0\q\w\r\5\g\e\7\z\x\7\z\k\d\g\o\n\l\p\q\3\p\f\7\3\3\w\t\i\4\6\5\r\b\h\v\c\q\4\r\0\h\5\m\7\w\q\j\e\0\1\7\8\g\m\n\7\i\w\7\h\5\p\q\7\a\h\0\l\h\8\0\8\s\t\v\q\b\4\7\e\a\6\6\k\5\7\r\k\2\o\o\f\s\h\j\c\v\5\z\b\b\i\s\k\d\e\n\e\b\k\h\u\o\m\8\n\x\c\6\z\h\d\d\w\p\v\b\4\n\2\i\f\e\3\6\c\w\n\i\g\
5\d\c\8\9\o\6\m\w\h\3\6\h\c\1\b\z\p\q\6\s\v\z\g\v\t\k\r\3\q\8\m\b\t\s\2\x\b\l\3\6\8\u\m\y\g\b\j\n\o\i\1\r\r\w\f\l\4\r\1\k\h\o\z\z\i\b\v\o\x\c\l\b\p\2\y\k\h\1\6\g\a\l\r\j\q\l\m\y\4\m\y\h\m\i\q\w\e\e\m\g\p\2\1\f\c\7\j\1\o\d\l\3\6\9\7\l\y\g\1\3\q\h\j\r\t\a\d\c\l\k\v\u\o\5\0\e\o\4\s\l\g\h\7\u\o\z\z\5\c\d\j\z\v\3\w\8\c\6\w\o\2\8\c\3\p\z\t\v\e\s\n\d\b\y\f\t\s\f\8\0\8\n\t\l\5\q\d\m\j\p\w\1\a\n\6\7\m\n\3\l\p\n\h\7\m\1\h\c\g\6\o\e\v\c\i\l\s\y\6\h\s\b\u\v\g\6\1\x\2\e\w\w\y\6\h\h\0\f\q\x\u\7\h\h\x\1\l\y\d\r\o\v\e\h\s\i\5\7\x\m\w\v\0\f\7\m\9\x\8\0\8\f\8\1\s\3\0\d\v\i\n\6\9\7\f\t\l\p\1\v\b\r\3\6\3\z\l\e\x\1\3\x\r\u\y\e\6\8\b\t\5\x\j\t\v\3\4\3\6\0\r\x\x\2\1\y\i\6\s\7\6\k\c\1\e\a\t\4\t\d\r\w\k\8\e\x\2\f\t\u\9\z\g\r\g\z\3\3\u\c\o\l\b\h\x\0\d\2\y\l\f\b\0\4\d\z\h\5\o\e\u\3\j\j\w\2\8\r\7\s\x\a\j\l\s\i\0\7\v\9\u\p\k\5\a\h\4\a\3\4\l\7\4\t\i\t\b\8\0\u\v\m\b\4\0\f\z\o\d\7\y\y\l\l\c\d\w\u\v\0\8\s\7\b\j\z\t\2\8\3\1\l\b\e\4\a\r\v\6\j\t\8\f\1\u\x\d\z\6\2\w\c\f\2\w\y\m\q\u\7\l\5\3\r\7\6\v\f\3\k\9\l\u\e\o\b\k\a\z\k\g\l\l\v\e\l\z\n\7\1\y\2\a\8\s\s\d\2\l\1\z\r\s\7\1\m\l\7\5\w\t\c\6\v\q\f\v\7\1\j\9\p\m\v\r\o\p\y\o\x\p\a\e\l\8\w\y\0\p\u\m\7\0\l\u\x\2\9\t\a\k\i\q\7\f\o\9\6\s\r\q\f\7\a\g\6\d\0\r\z\3\t\k\0\a\f\5\2\j\w\q\o\s\v\n\r\q\p\8\8\m\0\y\8\x\b\e\n\y\r\i\y\v\3\8\r\o\z\l\t\n\3\0\3\t\o\e\r\6\1\6\t\h\2\q\0\h\u\w\h\n\4\u\e\l\c\6\2\h\y\3\4\q\v\9\8\d\c\x\c\4\k\u\n\p\1\8\q\2\a\8\n\6\y\6\b\z\m\c\7\t\4\3\1\b\q\f\0\7\m\j\4\3\w\s\h\x\s\j\q\t\p\s\5\d\b\h\e\t\z\9\n\f\7\5\w\a\k\5\2\v\s\q\l\2\l\y\6\t\8\g\1\b\d\n\l\y\g\q\q\u\k\1\3\x\o\1\m\9\b\5\l\k\f\c\p\h\8\v\0\i\n\n\3\4\0\8\j\p\8\9\u\o\h\j\8\w\5\l\v\v\v\d\l\i\4\7\a\4\8\y\h\h\g\j\g\p\7\q\z\8\d\9\v\k\d\m\o\d\y\j\q\j\0\c\a\7\2\q\9\c\1\l\6\k\5\b\e\f\v\d\w\w\1\q\z\u\n\k\p\s\z\j\b\7\0\x\y\x\i\n\k\s\r\o\c\x\5\6\7\4\x\4\o\8\5\j\b\6\i\3\0\r\h\q\t\a\g\v\d\g\h\d\j\d\5\a\1\9\2\k\c\7\f\5\t\h\l\4\y\4\h\k\4\d\l\g\t\s\h\o\t\u\1\5\a\w\m\l\y\s\l\2\d\0\9\k\a\9\f\t\9\f\1\c\0\d\k\n\4\t\n\m\w\j\6\1\e\x\x\4\k\c\x\c\4\q\9\s\8\8\5\0\3\y\1\a\a\x\6\g\1\g\0\x\i\a\c\p\h\9\n\v\i\t\5\k\7\g\6\x\q\k\f\a\a\0\d\x\c\g\j\j\o\c\3\n\y\0\x\s\w\1\v\6\z\u\n\0\d\2\4\u\h\f\x\8\t\7\y\t\m\b\u\g\v\o\l\i\h\5\6\h\t\0\o\j\7\u\m\a\6\o\q\t\p\o\y\x\x\0\e\e\8\q\n\o\b\h\r\h\j\v\n\t\u\a\j\q\t\q\g\a\l\d\j\c\w\q\o\i\t\r\a\i\j\8\b\y\v\k\3\4\b\j\4\k\d\w\i\z\4\p\r\2\e\h\t\g\r\d\5\j\w\t\m\6\t\p\l\1\z\e\i\e\f\c\0\j\0\o\2\r\e\7\d\b\k\i\e\1\0\a\z\4\z\i\7\5\2\y\2\8\4\5\y\d\b\g\v\x\8\9\u\f\j\9\1\1\j\f\e\2\1\6\f\5\n\1\i\x\z\h\h\d\x\a\0\s\k\r\n\w\u\j\o\j\k\4\4\i\y\w\c\t\x\5\8\l\l\f\8\8\h\e\4\3\s\x\5\4\k\d\y\a\o\e\u\q\c\t\8\9\9\g\0\h\q\7\5\w\m\6\s\3\z\f\u\c\s\9\y\v\f\d\v\g\2\v\y\8\z\j\5\q\u\1\5\e\s\s\s\6\p\i\y\y\l\p\2\f\t\z\o\a\w\q\8\z\f\z\0\i\o\c\x\7\1\x\a\k\1\l\1\0\j\0\z\4\7\4\v\3\m\s\r\5\7\9\w\3\q\j\c\m\u\w\4\i\j\n\w\f\3 ]] 00:06:05.354 00:06:05.354 real 0m1.271s 00:06:05.354 user 0m0.862s 00:06:05.354 sys 0m0.601s 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:05.354 08:18:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.354 [2024-11-20 08:18:52.792817] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:05.354 [2024-11-20 08:18:52.793136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59814 ] 00:06:05.354 { 00:06:05.354 "subsystems": [ 00:06:05.354 { 00:06:05.354 "subsystem": "bdev", 00:06:05.354 "config": [ 00:06:05.354 { 00:06:05.354 "params": { 00:06:05.354 "trtype": "pcie", 00:06:05.354 "traddr": "0000:00:10.0", 00:06:05.354 "name": "Nvme0" 00:06:05.354 }, 00:06:05.354 "method": "bdev_nvme_attach_controller" 00:06:05.354 }, 00:06:05.354 { 00:06:05.354 "method": "bdev_wait_for_examine" 00:06:05.354 } 00:06:05.354 ] 00:06:05.354 } 00:06:05.354 ] 00:06:05.354 } 00:06:05.613 [2024-11-20 08:18:52.943374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.613 [2024-11-20 08:18:53.014049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.613 [2024-11-20 08:18:53.073662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.870  [2024-11-20T08:18:53.431Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:05.870 00:06:05.870 08:18:53 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.870 ************************************ 00:06:05.870 END TEST spdk_dd_basic_rw 00:06:05.870 ************************************ 00:06:05.870 00:06:05.870 real 0m17.545s 00:06:05.870 user 0m12.400s 00:06:05.870 sys 0m6.864s 00:06:05.871 08:18:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:05.871 08:18:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.129 08:18:53 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:06.129 08:18:53 spdk_dd -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:06.129 08:18:53 spdk_dd -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:06.129 08:18:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:06.129 ************************************ 00:06:06.129 START TEST spdk_dd_posix 00:06:06.129 ************************************ 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:06.129 * Looking for test storage... 
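The basic_rw offset test that finishes above writes dd.dump0 to the Nvme0n1 bdev at --seek=1, reads the same block back with --skip=1, and hands spdk_dd its bdev configuration as JSON on /dev/fd/62. A minimal sketch of that round-trip outside the harness follows; the herestring-on-fd-62 plumbing and the cmp check are illustrative assumptions, while the paths, flags and PCI address are the ones shown in the log.
DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF='{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},
  {"method":"bdev_wait_for_examine"}]}]}'
# Write one block of dd.dump0 at offset 1, feeding the config over an anonymous fd.
"$SPDK_DD" --if="$DD_DIR/dd.dump0" --ob=Nvme0n1 --seek=1 --json /dev/fd/62 62<<<"$CONF"
# Read that block back into dd.dump1 and compare the first 4096 bytes.
"$SPDK_DD" --ib=Nvme0n1 --of="$DD_DIR/dd.dump1" --skip=1 --count=1 --json /dev/fd/62 62<<<"$CONF"
cmp -n 4096 "$DD_DIR/dd.dump0" "$DD_DIR/dd.dump1" && echo 'offset read matches offset write'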
00:06:06.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1638 -- # lcov --version 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:06:06.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.129 --rc genhtml_branch_coverage=1 00:06:06.129 --rc genhtml_function_coverage=1 00:06:06.129 --rc genhtml_legend=1 00:06:06.129 --rc geninfo_all_blocks=1 00:06:06.129 --rc geninfo_unexecuted_blocks=1 00:06:06.129 00:06:06.129 ' 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:06:06.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.129 --rc genhtml_branch_coverage=1 00:06:06.129 --rc genhtml_function_coverage=1 00:06:06.129 --rc genhtml_legend=1 00:06:06.129 --rc geninfo_all_blocks=1 00:06:06.129 --rc geninfo_unexecuted_blocks=1 00:06:06.129 00:06:06.129 ' 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:06:06.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.129 --rc genhtml_branch_coverage=1 00:06:06.129 --rc genhtml_function_coverage=1 00:06:06.129 --rc genhtml_legend=1 00:06:06.129 --rc geninfo_all_blocks=1 00:06:06.129 --rc geninfo_unexecuted_blocks=1 00:06:06.129 00:06:06.129 ' 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:06:06.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.129 --rc genhtml_branch_coverage=1 00:06:06.129 --rc genhtml_function_coverage=1 00:06:06.129 --rc genhtml_legend=1 00:06:06.129 --rc geninfo_all_blocks=1 00:06:06.129 --rc geninfo_unexecuted_blocks=1 00:06:06.129 00:06:06.129 ' 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap 
cleanup EXIT 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:06.129 * First test run, liburing in use 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:06.129 08:18:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:06.387 ************************************ 00:06:06.387 START TEST dd_flag_append 00:06:06.387 ************************************ 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1132 -- # append 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=iq6zdh058agiec8ux1dzr79f0u9akhsv 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=o8ab28qp6ek0sr72ffkw23d73p0jojz6 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s iq6zdh058agiec8ux1dzr79f0u9akhsv 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s o8ab28qp6ek0sr72ffkw23d73p0jojz6 00:06:06.387 08:18:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:06.387 [2024-11-20 08:18:53.748726] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
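dump0 and dump1 above each receive 32 random bytes, and the copy is issued with --oflag=append, so the [[ ... ]] check that follows expects dd.dump1 to end up holding its own 32 bytes with dump0's appended. A rough equivalent using coreutils dd in place of spdk_dd (an illustration, not the test script itself):
dump0=iq6zdh058agiec8ux1dzr79f0u9akhsv    # the 32 bytes generated above
dump1=o8ab28qp6ek0sr72ffkw23d73p0jojz6
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
# oflag=append opens the output O_APPEND; conv=notrunc keeps dd from truncating it first.
dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc status=none
[[ $(<dd.dump1) == "${dump1}${dump0}" ]] && echo 'existing contents kept, new bytes appended'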
00:06:06.388 [2024-11-20 08:18:53.748843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59892 ] 00:06:06.388 [2024-11-20 08:18:53.893175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.646 [2024-11-20 08:18:53.955048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.646 [2024-11-20 08:18:54.016828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.646  [2024-11-20T08:18:54.465Z] Copying: 32/32 [B] (average 31 kBps) 00:06:06.904 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ o8ab28qp6ek0sr72ffkw23d73p0jojz6iq6zdh058agiec8ux1dzr79f0u9akhsv == \o\8\a\b\2\8\q\p\6\e\k\0\s\r\7\2\f\f\k\w\2\3\d\7\3\p\0\j\o\j\z\6\i\q\6\z\d\h\0\5\8\a\g\i\e\c\8\u\x\1\d\z\r\7\9\f\0\u\9\a\k\h\s\v ]] 00:06:06.904 00:06:06.904 real 0m0.560s 00:06:06.904 user 0m0.299s 00:06:06.904 sys 0m0.288s 00:06:06.904 ************************************ 00:06:06.904 END TEST dd_flag_append 00:06:06.904 ************************************ 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:06.904 ************************************ 00:06:06.904 START TEST dd_flag_directory 00:06:06.904 ************************************ 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1132 -- # directory 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # local es=0 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:06.904 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:06.904 [2024-11-20 08:18:54.391712] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:06.904 [2024-11-20 08:18:54.391931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59927 ] 00:06:07.163 [2024-11-20 08:18:54.541850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.163 [2024-11-20 08:18:54.599890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.163 [2024-11-20 08:18:54.656275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.163 [2024-11-20 08:18:54.693564] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:07.163 [2024-11-20 08:18:54.693643] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:07.163 [2024-11-20 08:18:54.693679] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.422 [2024-11-20 08:18:54.817779] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@658 -- # es=236 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@667 -- # es=108 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # case "$es" in 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # es=1 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # local es=0 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:07.422 08:18:54 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:07.422 08:18:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:07.422 [2024-11-20 08:18:54.960080] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:07.422 [2024-11-20 08:18:54.960343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59932 ] 00:06:07.681 [2024-11-20 08:18:55.103538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.681 [2024-11-20 08:18:55.156407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.681 [2024-11-20 08:18:55.211751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.940 [2024-11-20 08:18:55.246513] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:07.940 [2024-11-20 08:18:55.246564] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:07.940 [2024-11-20 08:18:55.246598] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.940 [2024-11-20 08:18:55.365165] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:07.940 ************************************ 00:06:07.940 END TEST dd_flag_directory 00:06:07.940 ************************************ 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@658 -- # es=236 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@667 -- # es=108 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # case "$es" in 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # es=1 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:06:07.940 00:06:07.940 real 0m1.136s 00:06:07.940 user 0m0.636s 00:06:07.940 sys 0m0.284s 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:07.940 08:18:55 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:07.940 ************************************ 00:06:07.940 START TEST dd_flag_nofollow 00:06:07.940 ************************************ 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1132 -- # nofollow 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # local es=0 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.940 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.199 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:08.199 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.199 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:08.199 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.199 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:08.199 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.199 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:08.199 08:18:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.199 [2024-11-20 08:18:55.557835] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:08.199 [2024-11-20 08:18:55.557927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59966 ] 00:06:08.199 [2024-11-20 08:18:55.706670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.458 [2024-11-20 08:18:55.765427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.458 [2024-11-20 08:18:55.822351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.458 [2024-11-20 08:18:55.860654] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:08.458 [2024-11-20 08:18:55.861053] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:08.458 [2024-11-20 08:18:55.861082] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.458 [2024-11-20 08:18:55.980415] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@658 -- # es=216 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@667 -- # es=88 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # case "$es" in 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # es=1 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # local es=0 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.717 08:18:56 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:08.717 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:08.717 [2024-11-20 08:18:56.117199] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:08.717 [2024-11-20 08:18:56.117291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59976 ] 00:06:08.717 [2024-11-20 08:18:56.264295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.976 [2024-11-20 08:18:56.312965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.976 [2024-11-20 08:18:56.368227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.976 [2024-11-20 08:18:56.408427] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:08.976 [2024-11-20 08:18:56.408484] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:08.976 [2024-11-20 08:18:56.408505] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.976 [2024-11-20 08:18:56.531937] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:09.235 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@658 -- # es=216 00:06:09.235 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:06:09.235 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@667 -- # es=88 00:06:09.235 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # case "$es" in 00:06:09.235 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # es=1 00:06:09.235 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:06:09.235 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:09.235 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:09.235 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:09.235 08:18:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.235 [2024-11-20 08:18:56.679765] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
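Both failures above are the expected ELOOP from opening a symlink with O_NOFOLLOW; "Too many levels of symbolic links" is the standard error text for that case. Coreutils dd accepts the same nofollow flag, so the behaviour can be reproduced outside SPDK (a sketch, with dd standing in for spdk_dd):
ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link
# Reading through a symlink with nofollow is rejected with ELOOP...
dd if=dd.dump0.link iflag=nofollow of=dd.dump1 2>&1 | grep 'Too many levels of symbolic links'
# ...and writing through one fails the same way.
dd if=dd.dump0 of=dd.dump1.link oflag=nofollow 2>&1 | grep 'Too many levels of symbolic links'
# Without the flag the links resolve and the copy goes through.
dd if=dd.dump0.link of=dd.dump1 status=none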
00:06:09.235 [2024-11-20 08:18:56.679934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59983 ] 00:06:09.494 [2024-11-20 08:18:56.827767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.495 [2024-11-20 08:18:56.878321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.495 [2024-11-20 08:18:56.933440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.495  [2024-11-20T08:18:57.314Z] Copying: 512/512 [B] (average 500 kBps) 00:06:09.753 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ xg9pek9o0skj75jzyunlcyodse5mfzs7qv4vm3nufqvukmwn9aiyotk5yeqirrql48n9nxnpypxo965irxg88drlpanuj9skujl2joe3jykqpj56ck7klyq796xg1g770bpemznzowk7xefduqiugjyp4yrs5g1v9w118gzbi1cxsb8exlopnhf5i1xrp66dlzwqliogf2d7dq3ts5pcxm85q9ikczojyh2ygo7blnpt2425kapnaau4yhc4f6eqgs0ea26e12nkso1cdhcxd9aa354b12qtn826nedxbawl8r2muja3hzuesv7s3kp36st6bb4xqnu9evz6y7fiao3k4o1wjyetafkxxpa4vla08b3le8kwjep1x2pjazxglp8egtpv1tois4gs35rjh2e1rpjx26zhvyn4f4m4s3vqok2mcvvsunyv9r31c0s58iodsn34yw2ssa2cilqhl49ay94f7h8frgetsgogg05btfuiwpg7m5swiqv228fj == \x\g\9\p\e\k\9\o\0\s\k\j\7\5\j\z\y\u\n\l\c\y\o\d\s\e\5\m\f\z\s\7\q\v\4\v\m\3\n\u\f\q\v\u\k\m\w\n\9\a\i\y\o\t\k\5\y\e\q\i\r\r\q\l\4\8\n\9\n\x\n\p\y\p\x\o\9\6\5\i\r\x\g\8\8\d\r\l\p\a\n\u\j\9\s\k\u\j\l\2\j\o\e\3\j\y\k\q\p\j\5\6\c\k\7\k\l\y\q\7\9\6\x\g\1\g\7\7\0\b\p\e\m\z\n\z\o\w\k\7\x\e\f\d\u\q\i\u\g\j\y\p\4\y\r\s\5\g\1\v\9\w\1\1\8\g\z\b\i\1\c\x\s\b\8\e\x\l\o\p\n\h\f\5\i\1\x\r\p\6\6\d\l\z\w\q\l\i\o\g\f\2\d\7\d\q\3\t\s\5\p\c\x\m\8\5\q\9\i\k\c\z\o\j\y\h\2\y\g\o\7\b\l\n\p\t\2\4\2\5\k\a\p\n\a\a\u\4\y\h\c\4\f\6\e\q\g\s\0\e\a\2\6\e\1\2\n\k\s\o\1\c\d\h\c\x\d\9\a\a\3\5\4\b\1\2\q\t\n\8\2\6\n\e\d\x\b\a\w\l\8\r\2\m\u\j\a\3\h\z\u\e\s\v\7\s\3\k\p\3\6\s\t\6\b\b\4\x\q\n\u\9\e\v\z\6\y\7\f\i\a\o\3\k\4\o\1\w\j\y\e\t\a\f\k\x\x\p\a\4\v\l\a\0\8\b\3\l\e\8\k\w\j\e\p\1\x\2\p\j\a\z\x\g\l\p\8\e\g\t\p\v\1\t\o\i\s\4\g\s\3\5\r\j\h\2\e\1\r\p\j\x\2\6\z\h\v\y\n\4\f\4\m\4\s\3\v\q\o\k\2\m\c\v\v\s\u\n\y\v\9\r\3\1\c\0\s\5\8\i\o\d\s\n\3\4\y\w\2\s\s\a\2\c\i\l\q\h\l\4\9\a\y\9\4\f\7\h\8\f\r\g\e\t\s\g\o\g\g\0\5\b\t\f\u\i\w\p\g\7\m\5\s\w\i\q\v\2\2\8\f\j ]] 00:06:09.753 00:06:09.753 real 0m1.687s 00:06:09.753 user 0m0.935s 00:06:09.753 sys 0m0.574s 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:09.753 ************************************ 00:06:09.753 END TEST dd_flag_nofollow 00:06:09.753 ************************************ 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:09.753 ************************************ 00:06:09.753 START TEST dd_flag_noatime 00:06:09.753 ************************************ 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1132 -- # noatime 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732090736 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732090737 00:06:09.753 08:18:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:11.129 08:18:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.129 [2024-11-20 08:18:58.318391] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:11.129 [2024-11-20 08:18:58.318493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60027 ] 00:06:11.129 [2024-11-20 08:18:58.461012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.129 [2024-11-20 08:18:58.527531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.129 [2024-11-20 08:18:58.586146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.129  [2024-11-20T08:18:58.948Z] Copying: 512/512 [B] (average 500 kBps) 00:06:11.387 00:06:11.387 08:18:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:11.387 08:18:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732090736 )) 00:06:11.387 08:18:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.387 08:18:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732090737 )) 00:06:11.387 08:18:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.387 [2024-11-20 08:18:58.889730] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
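Here the source and destination atimes (1732090736 and 1732090737, read with stat --printf=%X before the sleep) are checked again after a copy issued with --iflag=noatime and found unchanged. The same check with coreutils dd, which also takes iflag=noatime, might look like this (O_NOATIME needs ownership of the file or CAP_FOWNER, and a noatime mount can mask the control case):
before=$(stat --printf=%X dd.dump0)
sleep 1
# Copy with O_NOATIME on the input: the source's access time should not move.
dd if=dd.dump0 iflag=noatime of=dd.dump1 status=none
after=$(stat --printf=%X dd.dump0)
(( after == before )) && echo 'source atime untouched by the noatime copy'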
00:06:11.387 [2024-11-20 08:18:58.889864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60040 ] 00:06:11.646 [2024-11-20 08:18:59.037704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.646 [2024-11-20 08:18:59.090073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.646 [2024-11-20 08:18:59.150463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.646  [2024-11-20T08:18:59.466Z] Copying: 512/512 [B] (average 500 kBps) 00:06:11.905 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:11.905 ************************************ 00:06:11.905 END TEST dd_flag_noatime 00:06:11.905 ************************************ 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732090739 )) 00:06:11.905 00:06:11.905 real 0m2.156s 00:06:11.905 user 0m0.629s 00:06:11.905 sys 0m0.591s 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:11.905 ************************************ 00:06:11.905 START TEST dd_flags_misc 00:06:11.905 ************************************ 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1132 -- # io 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.905 08:18:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:12.165 [2024-11-20 08:18:59.508827] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:12.165 [2024-11-20 08:18:59.508929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60069 ] 00:06:12.165 [2024-11-20 08:18:59.654747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.165 [2024-11-20 08:18:59.710425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.423 [2024-11-20 08:18:59.766442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.423  [2024-11-20T08:19:00.242Z] Copying: 512/512 [B] (average 500 kBps) 00:06:12.681 00:06:12.682 08:19:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ntda6eaxk8mybxcpk5bo28vvs7efs9mk5jex9dusfa7kdk8o12hqp2v241guafxay8apjpzhz90u2lorhj4g649cft0zqpr4vvvdu7npsm1fqgvldqssi91f3o37dfw4i23bj7l939iqtyy1vu722y8t3o5fkvhjw0awow9av23kxukoyslgxe03nvxf8v0nl93lzv50cetyt1p3vo47dir2zm6vvclqjujbovh9vuvy6qj4m5k2a4vo4p92xgmmvt6224mwpydo8w2v9i12tkeouels3ji0s15x4e445lztad30qiltid4ik76gnzd3fh84x6nyqv8f77xjvv325ddz5lfn16d7egnunytx4kq1dbp2c3fqf8m8b63mfc34ebujtw95igqnikxri0gaci30j3blw8c5u5i7m1a0nnc0fzveekho952740sqgaaqya7d1n6jvrkzc5slcclvvhlej0w301amtz476mkaw5nupknw4m13usoh6eoaso2g == \n\t\d\a\6\e\a\x\k\8\m\y\b\x\c\p\k\5\b\o\2\8\v\v\s\7\e\f\s\9\m\k\5\j\e\x\9\d\u\s\f\a\7\k\d\k\8\o\1\2\h\q\p\2\v\2\4\1\g\u\a\f\x\a\y\8\a\p\j\p\z\h\z\9\0\u\2\l\o\r\h\j\4\g\6\4\9\c\f\t\0\z\q\p\r\4\v\v\v\d\u\7\n\p\s\m\1\f\q\g\v\l\d\q\s\s\i\9\1\f\3\o\3\7\d\f\w\4\i\2\3\b\j\7\l\9\3\9\i\q\t\y\y\1\v\u\7\2\2\y\8\t\3\o\5\f\k\v\h\j\w\0\a\w\o\w\9\a\v\2\3\k\x\u\k\o\y\s\l\g\x\e\0\3\n\v\x\f\8\v\0\n\l\9\3\l\z\v\5\0\c\e\t\y\t\1\p\3\v\o\4\7\d\i\r\2\z\m\6\v\v\c\l\q\j\u\j\b\o\v\h\9\v\u\v\y\6\q\j\4\m\5\k\2\a\4\v\o\4\p\9\2\x\g\m\m\v\t\6\2\2\4\m\w\p\y\d\o\8\w\2\v\9\i\1\2\t\k\e\o\u\e\l\s\3\j\i\0\s\1\5\x\4\e\4\4\5\l\z\t\a\d\3\0\q\i\l\t\i\d\4\i\k\7\6\g\n\z\d\3\f\h\8\4\x\6\n\y\q\v\8\f\7\7\x\j\v\v\3\2\5\d\d\z\5\l\f\n\1\6\d\7\e\g\n\u\n\y\t\x\4\k\q\1\d\b\p\2\c\3\f\q\f\8\m\8\b\6\3\m\f\c\3\4\e\b\u\j\t\w\9\5\i\g\q\n\i\k\x\r\i\0\g\a\c\i\3\0\j\3\b\l\w\8\c\5\u\5\i\7\m\1\a\0\n\n\c\0\f\z\v\e\e\k\h\o\9\5\2\7\4\0\s\q\g\a\a\q\y\a\7\d\1\n\6\j\v\r\k\z\c\5\s\l\c\c\l\v\v\h\l\e\j\0\w\3\0\1\a\m\t\z\4\7\6\m\k\a\w\5\n\u\p\k\n\w\4\m\1\3\u\s\o\h\6\e\o\a\s\o\2\g ]] 00:06:12.682 08:19:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.682 08:19:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:12.682 [2024-11-20 08:19:00.062549] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:12.682 [2024-11-20 08:19:00.062675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60078 ] 00:06:12.682 [2024-11-20 08:19:00.208344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.940 [2024-11-20 08:19:00.273059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.940 [2024-11-20 08:19:00.328930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.940  [2024-11-20T08:19:00.759Z] Copying: 512/512 [B] (average 500 kBps) 00:06:13.198 00:06:13.198 08:19:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ntda6eaxk8mybxcpk5bo28vvs7efs9mk5jex9dusfa7kdk8o12hqp2v241guafxay8apjpzhz90u2lorhj4g649cft0zqpr4vvvdu7npsm1fqgvldqssi91f3o37dfw4i23bj7l939iqtyy1vu722y8t3o5fkvhjw0awow9av23kxukoyslgxe03nvxf8v0nl93lzv50cetyt1p3vo47dir2zm6vvclqjujbovh9vuvy6qj4m5k2a4vo4p92xgmmvt6224mwpydo8w2v9i12tkeouels3ji0s15x4e445lztad30qiltid4ik76gnzd3fh84x6nyqv8f77xjvv325ddz5lfn16d7egnunytx4kq1dbp2c3fqf8m8b63mfc34ebujtw95igqnikxri0gaci30j3blw8c5u5i7m1a0nnc0fzveekho952740sqgaaqya7d1n6jvrkzc5slcclvvhlej0w301amtz476mkaw5nupknw4m13usoh6eoaso2g == \n\t\d\a\6\e\a\x\k\8\m\y\b\x\c\p\k\5\b\o\2\8\v\v\s\7\e\f\s\9\m\k\5\j\e\x\9\d\u\s\f\a\7\k\d\k\8\o\1\2\h\q\p\2\v\2\4\1\g\u\a\f\x\a\y\8\a\p\j\p\z\h\z\9\0\u\2\l\o\r\h\j\4\g\6\4\9\c\f\t\0\z\q\p\r\4\v\v\v\d\u\7\n\p\s\m\1\f\q\g\v\l\d\q\s\s\i\9\1\f\3\o\3\7\d\f\w\4\i\2\3\b\j\7\l\9\3\9\i\q\t\y\y\1\v\u\7\2\2\y\8\t\3\o\5\f\k\v\h\j\w\0\a\w\o\w\9\a\v\2\3\k\x\u\k\o\y\s\l\g\x\e\0\3\n\v\x\f\8\v\0\n\l\9\3\l\z\v\5\0\c\e\t\y\t\1\p\3\v\o\4\7\d\i\r\2\z\m\6\v\v\c\l\q\j\u\j\b\o\v\h\9\v\u\v\y\6\q\j\4\m\5\k\2\a\4\v\o\4\p\9\2\x\g\m\m\v\t\6\2\2\4\m\w\p\y\d\o\8\w\2\v\9\i\1\2\t\k\e\o\u\e\l\s\3\j\i\0\s\1\5\x\4\e\4\4\5\l\z\t\a\d\3\0\q\i\l\t\i\d\4\i\k\7\6\g\n\z\d\3\f\h\8\4\x\6\n\y\q\v\8\f\7\7\x\j\v\v\3\2\5\d\d\z\5\l\f\n\1\6\d\7\e\g\n\u\n\y\t\x\4\k\q\1\d\b\p\2\c\3\f\q\f\8\m\8\b\6\3\m\f\c\3\4\e\b\u\j\t\w\9\5\i\g\q\n\i\k\x\r\i\0\g\a\c\i\3\0\j\3\b\l\w\8\c\5\u\5\i\7\m\1\a\0\n\n\c\0\f\z\v\e\e\k\h\o\9\5\2\7\4\0\s\q\g\a\a\q\y\a\7\d\1\n\6\j\v\r\k\z\c\5\s\l\c\c\l\v\v\h\l\e\j\0\w\3\0\1\a\m\t\z\4\7\6\m\k\a\w\5\n\u\p\k\n\w\4\m\1\3\u\s\o\h\6\e\o\a\s\o\2\g ]] 00:06:13.198 08:19:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.198 08:19:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:13.198 [2024-11-20 08:19:00.630665] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:13.198 [2024-11-20 08:19:00.630795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60089 ] 00:06:13.457 [2024-11-20 08:19:00.777867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.457 [2024-11-20 08:19:00.842422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.457 [2024-11-20 08:19:00.900462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.457  [2024-11-20T08:19:01.276Z] Copying: 512/512 [B] (average 250 kBps) 00:06:13.715 00:06:13.715 08:19:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ntda6eaxk8mybxcpk5bo28vvs7efs9mk5jex9dusfa7kdk8o12hqp2v241guafxay8apjpzhz90u2lorhj4g649cft0zqpr4vvvdu7npsm1fqgvldqssi91f3o37dfw4i23bj7l939iqtyy1vu722y8t3o5fkvhjw0awow9av23kxukoyslgxe03nvxf8v0nl93lzv50cetyt1p3vo47dir2zm6vvclqjujbovh9vuvy6qj4m5k2a4vo4p92xgmmvt6224mwpydo8w2v9i12tkeouels3ji0s15x4e445lztad30qiltid4ik76gnzd3fh84x6nyqv8f77xjvv325ddz5lfn16d7egnunytx4kq1dbp2c3fqf8m8b63mfc34ebujtw95igqnikxri0gaci30j3blw8c5u5i7m1a0nnc0fzveekho952740sqgaaqya7d1n6jvrkzc5slcclvvhlej0w301amtz476mkaw5nupknw4m13usoh6eoaso2g == \n\t\d\a\6\e\a\x\k\8\m\y\b\x\c\p\k\5\b\o\2\8\v\v\s\7\e\f\s\9\m\k\5\j\e\x\9\d\u\s\f\a\7\k\d\k\8\o\1\2\h\q\p\2\v\2\4\1\g\u\a\f\x\a\y\8\a\p\j\p\z\h\z\9\0\u\2\l\o\r\h\j\4\g\6\4\9\c\f\t\0\z\q\p\r\4\v\v\v\d\u\7\n\p\s\m\1\f\q\g\v\l\d\q\s\s\i\9\1\f\3\o\3\7\d\f\w\4\i\2\3\b\j\7\l\9\3\9\i\q\t\y\y\1\v\u\7\2\2\y\8\t\3\o\5\f\k\v\h\j\w\0\a\w\o\w\9\a\v\2\3\k\x\u\k\o\y\s\l\g\x\e\0\3\n\v\x\f\8\v\0\n\l\9\3\l\z\v\5\0\c\e\t\y\t\1\p\3\v\o\4\7\d\i\r\2\z\m\6\v\v\c\l\q\j\u\j\b\o\v\h\9\v\u\v\y\6\q\j\4\m\5\k\2\a\4\v\o\4\p\9\2\x\g\m\m\v\t\6\2\2\4\m\w\p\y\d\o\8\w\2\v\9\i\1\2\t\k\e\o\u\e\l\s\3\j\i\0\s\1\5\x\4\e\4\4\5\l\z\t\a\d\3\0\q\i\l\t\i\d\4\i\k\7\6\g\n\z\d\3\f\h\8\4\x\6\n\y\q\v\8\f\7\7\x\j\v\v\3\2\5\d\d\z\5\l\f\n\1\6\d\7\e\g\n\u\n\y\t\x\4\k\q\1\d\b\p\2\c\3\f\q\f\8\m\8\b\6\3\m\f\c\3\4\e\b\u\j\t\w\9\5\i\g\q\n\i\k\x\r\i\0\g\a\c\i\3\0\j\3\b\l\w\8\c\5\u\5\i\7\m\1\a\0\n\n\c\0\f\z\v\e\e\k\h\o\9\5\2\7\4\0\s\q\g\a\a\q\y\a\7\d\1\n\6\j\v\r\k\z\c\5\s\l\c\c\l\v\v\h\l\e\j\0\w\3\0\1\a\m\t\z\4\7\6\m\k\a\w\5\n\u\p\k\n\w\4\m\1\3\u\s\o\h\6\e\o\a\s\o\2\g ]] 00:06:13.715 08:19:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.715 08:19:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:13.715 [2024-11-20 08:19:01.197276] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:13.715 [2024-11-20 08:19:01.197371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60097 ] 00:06:13.974 [2024-11-20 08:19:01.345221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.974 [2024-11-20 08:19:01.411747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.974 [2024-11-20 08:19:01.468162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.974  [2024-11-20T08:19:01.793Z] Copying: 512/512 [B] (average 250 kBps) 00:06:14.232 00:06:14.232 08:19:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ntda6eaxk8mybxcpk5bo28vvs7efs9mk5jex9dusfa7kdk8o12hqp2v241guafxay8apjpzhz90u2lorhj4g649cft0zqpr4vvvdu7npsm1fqgvldqssi91f3o37dfw4i23bj7l939iqtyy1vu722y8t3o5fkvhjw0awow9av23kxukoyslgxe03nvxf8v0nl93lzv50cetyt1p3vo47dir2zm6vvclqjujbovh9vuvy6qj4m5k2a4vo4p92xgmmvt6224mwpydo8w2v9i12tkeouels3ji0s15x4e445lztad30qiltid4ik76gnzd3fh84x6nyqv8f77xjvv325ddz5lfn16d7egnunytx4kq1dbp2c3fqf8m8b63mfc34ebujtw95igqnikxri0gaci30j3blw8c5u5i7m1a0nnc0fzveekho952740sqgaaqya7d1n6jvrkzc5slcclvvhlej0w301amtz476mkaw5nupknw4m13usoh6eoaso2g == \n\t\d\a\6\e\a\x\k\8\m\y\b\x\c\p\k\5\b\o\2\8\v\v\s\7\e\f\s\9\m\k\5\j\e\x\9\d\u\s\f\a\7\k\d\k\8\o\1\2\h\q\p\2\v\2\4\1\g\u\a\f\x\a\y\8\a\p\j\p\z\h\z\9\0\u\2\l\o\r\h\j\4\g\6\4\9\c\f\t\0\z\q\p\r\4\v\v\v\d\u\7\n\p\s\m\1\f\q\g\v\l\d\q\s\s\i\9\1\f\3\o\3\7\d\f\w\4\i\2\3\b\j\7\l\9\3\9\i\q\t\y\y\1\v\u\7\2\2\y\8\t\3\o\5\f\k\v\h\j\w\0\a\w\o\w\9\a\v\2\3\k\x\u\k\o\y\s\l\g\x\e\0\3\n\v\x\f\8\v\0\n\l\9\3\l\z\v\5\0\c\e\t\y\t\1\p\3\v\o\4\7\d\i\r\2\z\m\6\v\v\c\l\q\j\u\j\b\o\v\h\9\v\u\v\y\6\q\j\4\m\5\k\2\a\4\v\o\4\p\9\2\x\g\m\m\v\t\6\2\2\4\m\w\p\y\d\o\8\w\2\v\9\i\1\2\t\k\e\o\u\e\l\s\3\j\i\0\s\1\5\x\4\e\4\4\5\l\z\t\a\d\3\0\q\i\l\t\i\d\4\i\k\7\6\g\n\z\d\3\f\h\8\4\x\6\n\y\q\v\8\f\7\7\x\j\v\v\3\2\5\d\d\z\5\l\f\n\1\6\d\7\e\g\n\u\n\y\t\x\4\k\q\1\d\b\p\2\c\3\f\q\f\8\m\8\b\6\3\m\f\c\3\4\e\b\u\j\t\w\9\5\i\g\q\n\i\k\x\r\i\0\g\a\c\i\3\0\j\3\b\l\w\8\c\5\u\5\i\7\m\1\a\0\n\n\c\0\f\z\v\e\e\k\h\o\9\5\2\7\4\0\s\q\g\a\a\q\y\a\7\d\1\n\6\j\v\r\k\z\c\5\s\l\c\c\l\v\v\h\l\e\j\0\w\3\0\1\a\m\t\z\4\7\6\m\k\a\w\5\n\u\p\k\n\w\4\m\1\3\u\s\o\h\6\e\o\a\s\o\2\g ]] 00:06:14.232 08:19:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:14.232 08:19:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:14.232 08:19:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:14.232 08:19:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:14.232 08:19:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.232 08:19:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:14.232 [2024-11-20 08:19:01.769357] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:14.232 [2024-11-20 08:19:01.769675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60112 ] 00:06:14.499 [2024-11-20 08:19:01.914419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.499 [2024-11-20 08:19:01.970344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.499 [2024-11-20 08:19:02.026557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.757  [2024-11-20T08:19:02.318Z] Copying: 512/512 [B] (average 500 kBps) 00:06:14.757 00:06:14.757 08:19:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f4ep88f2jz8iqh99q8375lggsco0i882ntee67y5vmgaxaonfm8g1q35mxc6iiar6q1ydcmcovomwiezckkvoabw6dpnavusj9nk9bo02me0mtxygp21alhuokx9wfkktehjz4b4cptul9moktczs0riflesfi0241u0bvcz7u68fm8e50xpm294j5u2k62j3yudicq77hxkjddcpj8ro8bsh40wd2d2o2hiat45yei6rqzlsgah5ye433468iwni64064zbggl62ac53h79o0c7nha84c3pyuizwmwmrr2trphcahms94gxynqpb8gdypxedg32q5w0ow5p3sxanc5gu4osqksh0nao1x0fxom7z85ppnxcvk20hll1sqrf5yiromtkt3u4llne2j7go6un3pya8dr479mde6mu3lq0ltmgl5gm3bfyhjalyv5nkuyucoldbpf9wpqimi2yybxcfnbcms744f7pe35109ob5hzkm8widu3nu5a51prb == \f\4\e\p\8\8\f\2\j\z\8\i\q\h\9\9\q\8\3\7\5\l\g\g\s\c\o\0\i\8\8\2\n\t\e\e\6\7\y\5\v\m\g\a\x\a\o\n\f\m\8\g\1\q\3\5\m\x\c\6\i\i\a\r\6\q\1\y\d\c\m\c\o\v\o\m\w\i\e\z\c\k\k\v\o\a\b\w\6\d\p\n\a\v\u\s\j\9\n\k\9\b\o\0\2\m\e\0\m\t\x\y\g\p\2\1\a\l\h\u\o\k\x\9\w\f\k\k\t\e\h\j\z\4\b\4\c\p\t\u\l\9\m\o\k\t\c\z\s\0\r\i\f\l\e\s\f\i\0\2\4\1\u\0\b\v\c\z\7\u\6\8\f\m\8\e\5\0\x\p\m\2\9\4\j\5\u\2\k\6\2\j\3\y\u\d\i\c\q\7\7\h\x\k\j\d\d\c\p\j\8\r\o\8\b\s\h\4\0\w\d\2\d\2\o\2\h\i\a\t\4\5\y\e\i\6\r\q\z\l\s\g\a\h\5\y\e\4\3\3\4\6\8\i\w\n\i\6\4\0\6\4\z\b\g\g\l\6\2\a\c\5\3\h\7\9\o\0\c\7\n\h\a\8\4\c\3\p\y\u\i\z\w\m\w\m\r\r\2\t\r\p\h\c\a\h\m\s\9\4\g\x\y\n\q\p\b\8\g\d\y\p\x\e\d\g\3\2\q\5\w\0\o\w\5\p\3\s\x\a\n\c\5\g\u\4\o\s\q\k\s\h\0\n\a\o\1\x\0\f\x\o\m\7\z\8\5\p\p\n\x\c\v\k\2\0\h\l\l\1\s\q\r\f\5\y\i\r\o\m\t\k\t\3\u\4\l\l\n\e\2\j\7\g\o\6\u\n\3\p\y\a\8\d\r\4\7\9\m\d\e\6\m\u\3\l\q\0\l\t\m\g\l\5\g\m\3\b\f\y\h\j\a\l\y\v\5\n\k\u\y\u\c\o\l\d\b\p\f\9\w\p\q\i\m\i\2\y\y\b\x\c\f\n\b\c\m\s\7\4\4\f\7\p\e\3\5\1\0\9\o\b\5\h\z\k\m\8\w\i\d\u\3\n\u\5\a\5\1\p\r\b ]] 00:06:14.757 08:19:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.757 08:19:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:14.757 [2024-11-20 08:19:02.302215] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:14.757 [2024-11-20 08:19:02.302317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:06:15.016 [2024-11-20 08:19:02.449202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.016 [2024-11-20 08:19:02.512865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.016 [2024-11-20 08:19:02.569086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.275  [2024-11-20T08:19:02.836Z] Copying: 512/512 [B] (average 500 kBps) 00:06:15.275 00:06:15.275 08:19:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f4ep88f2jz8iqh99q8375lggsco0i882ntee67y5vmgaxaonfm8g1q35mxc6iiar6q1ydcmcovomwiezckkvoabw6dpnavusj9nk9bo02me0mtxygp21alhuokx9wfkktehjz4b4cptul9moktczs0riflesfi0241u0bvcz7u68fm8e50xpm294j5u2k62j3yudicq77hxkjddcpj8ro8bsh40wd2d2o2hiat45yei6rqzlsgah5ye433468iwni64064zbggl62ac53h79o0c7nha84c3pyuizwmwmrr2trphcahms94gxynqpb8gdypxedg32q5w0ow5p3sxanc5gu4osqksh0nao1x0fxom7z85ppnxcvk20hll1sqrf5yiromtkt3u4llne2j7go6un3pya8dr479mde6mu3lq0ltmgl5gm3bfyhjalyv5nkuyucoldbpf9wpqimi2yybxcfnbcms744f7pe35109ob5hzkm8widu3nu5a51prb == \f\4\e\p\8\8\f\2\j\z\8\i\q\h\9\9\q\8\3\7\5\l\g\g\s\c\o\0\i\8\8\2\n\t\e\e\6\7\y\5\v\m\g\a\x\a\o\n\f\m\8\g\1\q\3\5\m\x\c\6\i\i\a\r\6\q\1\y\d\c\m\c\o\v\o\m\w\i\e\z\c\k\k\v\o\a\b\w\6\d\p\n\a\v\u\s\j\9\n\k\9\b\o\0\2\m\e\0\m\t\x\y\g\p\2\1\a\l\h\u\o\k\x\9\w\f\k\k\t\e\h\j\z\4\b\4\c\p\t\u\l\9\m\o\k\t\c\z\s\0\r\i\f\l\e\s\f\i\0\2\4\1\u\0\b\v\c\z\7\u\6\8\f\m\8\e\5\0\x\p\m\2\9\4\j\5\u\2\k\6\2\j\3\y\u\d\i\c\q\7\7\h\x\k\j\d\d\c\p\j\8\r\o\8\b\s\h\4\0\w\d\2\d\2\o\2\h\i\a\t\4\5\y\e\i\6\r\q\z\l\s\g\a\h\5\y\e\4\3\3\4\6\8\i\w\n\i\6\4\0\6\4\z\b\g\g\l\6\2\a\c\5\3\h\7\9\o\0\c\7\n\h\a\8\4\c\3\p\y\u\i\z\w\m\w\m\r\r\2\t\r\p\h\c\a\h\m\s\9\4\g\x\y\n\q\p\b\8\g\d\y\p\x\e\d\g\3\2\q\5\w\0\o\w\5\p\3\s\x\a\n\c\5\g\u\4\o\s\q\k\s\h\0\n\a\o\1\x\0\f\x\o\m\7\z\8\5\p\p\n\x\c\v\k\2\0\h\l\l\1\s\q\r\f\5\y\i\r\o\m\t\k\t\3\u\4\l\l\n\e\2\j\7\g\o\6\u\n\3\p\y\a\8\d\r\4\7\9\m\d\e\6\m\u\3\l\q\0\l\t\m\g\l\5\g\m\3\b\f\y\h\j\a\l\y\v\5\n\k\u\y\u\c\o\l\d\b\p\f\9\w\p\q\i\m\i\2\y\y\b\x\c\f\n\b\c\m\s\7\4\4\f\7\p\e\3\5\1\0\9\o\b\5\h\z\k\m\8\w\i\d\u\3\n\u\5\a\5\1\p\r\b ]] 00:06:15.275 08:19:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.275 08:19:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:15.275 [2024-11-20 08:19:02.830966] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:15.275 [2024-11-20 08:19:02.831047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60133 ] 00:06:15.534 [2024-11-20 08:19:02.972185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.534 [2024-11-20 08:19:03.032651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.534 [2024-11-20 08:19:03.088192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.793  [2024-11-20T08:19:03.354Z] Copying: 512/512 [B] (average 125 kBps) 00:06:15.793 00:06:15.793 08:19:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f4ep88f2jz8iqh99q8375lggsco0i882ntee67y5vmgaxaonfm8g1q35mxc6iiar6q1ydcmcovomwiezckkvoabw6dpnavusj9nk9bo02me0mtxygp21alhuokx9wfkktehjz4b4cptul9moktczs0riflesfi0241u0bvcz7u68fm8e50xpm294j5u2k62j3yudicq77hxkjddcpj8ro8bsh40wd2d2o2hiat45yei6rqzlsgah5ye433468iwni64064zbggl62ac53h79o0c7nha84c3pyuizwmwmrr2trphcahms94gxynqpb8gdypxedg32q5w0ow5p3sxanc5gu4osqksh0nao1x0fxom7z85ppnxcvk20hll1sqrf5yiromtkt3u4llne2j7go6un3pya8dr479mde6mu3lq0ltmgl5gm3bfyhjalyv5nkuyucoldbpf9wpqimi2yybxcfnbcms744f7pe35109ob5hzkm8widu3nu5a51prb == \f\4\e\p\8\8\f\2\j\z\8\i\q\h\9\9\q\8\3\7\5\l\g\g\s\c\o\0\i\8\8\2\n\t\e\e\6\7\y\5\v\m\g\a\x\a\o\n\f\m\8\g\1\q\3\5\m\x\c\6\i\i\a\r\6\q\1\y\d\c\m\c\o\v\o\m\w\i\e\z\c\k\k\v\o\a\b\w\6\d\p\n\a\v\u\s\j\9\n\k\9\b\o\0\2\m\e\0\m\t\x\y\g\p\2\1\a\l\h\u\o\k\x\9\w\f\k\k\t\e\h\j\z\4\b\4\c\p\t\u\l\9\m\o\k\t\c\z\s\0\r\i\f\l\e\s\f\i\0\2\4\1\u\0\b\v\c\z\7\u\6\8\f\m\8\e\5\0\x\p\m\2\9\4\j\5\u\2\k\6\2\j\3\y\u\d\i\c\q\7\7\h\x\k\j\d\d\c\p\j\8\r\o\8\b\s\h\4\0\w\d\2\d\2\o\2\h\i\a\t\4\5\y\e\i\6\r\q\z\l\s\g\a\h\5\y\e\4\3\3\4\6\8\i\w\n\i\6\4\0\6\4\z\b\g\g\l\6\2\a\c\5\3\h\7\9\o\0\c\7\n\h\a\8\4\c\3\p\y\u\i\z\w\m\w\m\r\r\2\t\r\p\h\c\a\h\m\s\9\4\g\x\y\n\q\p\b\8\g\d\y\p\x\e\d\g\3\2\q\5\w\0\o\w\5\p\3\s\x\a\n\c\5\g\u\4\o\s\q\k\s\h\0\n\a\o\1\x\0\f\x\o\m\7\z\8\5\p\p\n\x\c\v\k\2\0\h\l\l\1\s\q\r\f\5\y\i\r\o\m\t\k\t\3\u\4\l\l\n\e\2\j\7\g\o\6\u\n\3\p\y\a\8\d\r\4\7\9\m\d\e\6\m\u\3\l\q\0\l\t\m\g\l\5\g\m\3\b\f\y\h\j\a\l\y\v\5\n\k\u\y\u\c\o\l\d\b\p\f\9\w\p\q\i\m\i\2\y\y\b\x\c\f\n\b\c\m\s\7\4\4\f\7\p\e\3\5\1\0\9\o\b\5\h\z\k\m\8\w\i\d\u\3\n\u\5\a\5\1\p\r\b ]] 00:06:15.793 08:19:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.793 08:19:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:16.050 [2024-11-20 08:19:03.351031] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:16.050 [2024-11-20 08:19:03.351153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60143 ] 00:06:16.050 [2024-11-20 08:19:03.496824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.050 [2024-11-20 08:19:03.549763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.050 [2024-11-20 08:19:03.605342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.309  [2024-11-20T08:19:03.870Z] Copying: 512/512 [B] (average 250 kBps) 00:06:16.309 00:06:16.309 08:19:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f4ep88f2jz8iqh99q8375lggsco0i882ntee67y5vmgaxaonfm8g1q35mxc6iiar6q1ydcmcovomwiezckkvoabw6dpnavusj9nk9bo02me0mtxygp21alhuokx9wfkktehjz4b4cptul9moktczs0riflesfi0241u0bvcz7u68fm8e50xpm294j5u2k62j3yudicq77hxkjddcpj8ro8bsh40wd2d2o2hiat45yei6rqzlsgah5ye433468iwni64064zbggl62ac53h79o0c7nha84c3pyuizwmwmrr2trphcahms94gxynqpb8gdypxedg32q5w0ow5p3sxanc5gu4osqksh0nao1x0fxom7z85ppnxcvk20hll1sqrf5yiromtkt3u4llne2j7go6un3pya8dr479mde6mu3lq0ltmgl5gm3bfyhjalyv5nkuyucoldbpf9wpqimi2yybxcfnbcms744f7pe35109ob5hzkm8widu3nu5a51prb == \f\4\e\p\8\8\f\2\j\z\8\i\q\h\9\9\q\8\3\7\5\l\g\g\s\c\o\0\i\8\8\2\n\t\e\e\6\7\y\5\v\m\g\a\x\a\o\n\f\m\8\g\1\q\3\5\m\x\c\6\i\i\a\r\6\q\1\y\d\c\m\c\o\v\o\m\w\i\e\z\c\k\k\v\o\a\b\w\6\d\p\n\a\v\u\s\j\9\n\k\9\b\o\0\2\m\e\0\m\t\x\y\g\p\2\1\a\l\h\u\o\k\x\9\w\f\k\k\t\e\h\j\z\4\b\4\c\p\t\u\l\9\m\o\k\t\c\z\s\0\r\i\f\l\e\s\f\i\0\2\4\1\u\0\b\v\c\z\7\u\6\8\f\m\8\e\5\0\x\p\m\2\9\4\j\5\u\2\k\6\2\j\3\y\u\d\i\c\q\7\7\h\x\k\j\d\d\c\p\j\8\r\o\8\b\s\h\4\0\w\d\2\d\2\o\2\h\i\a\t\4\5\y\e\i\6\r\q\z\l\s\g\a\h\5\y\e\4\3\3\4\6\8\i\w\n\i\6\4\0\6\4\z\b\g\g\l\6\2\a\c\5\3\h\7\9\o\0\c\7\n\h\a\8\4\c\3\p\y\u\i\z\w\m\w\m\r\r\2\t\r\p\h\c\a\h\m\s\9\4\g\x\y\n\q\p\b\8\g\d\y\p\x\e\d\g\3\2\q\5\w\0\o\w\5\p\3\s\x\a\n\c\5\g\u\4\o\s\q\k\s\h\0\n\a\o\1\x\0\f\x\o\m\7\z\8\5\p\p\n\x\c\v\k\2\0\h\l\l\1\s\q\r\f\5\y\i\r\o\m\t\k\t\3\u\4\l\l\n\e\2\j\7\g\o\6\u\n\3\p\y\a\8\d\r\4\7\9\m\d\e\6\m\u\3\l\q\0\l\t\m\g\l\5\g\m\3\b\f\y\h\j\a\l\y\v\5\n\k\u\y\u\c\o\l\d\b\p\f\9\w\p\q\i\m\i\2\y\y\b\x\c\f\n\b\c\m\s\7\4\4\f\7\p\e\3\5\1\0\9\o\b\5\h\z\k\m\8\w\i\d\u\3\n\u\5\a\5\1\p\r\b ]] 00:06:16.309 00:06:16.309 real 0m4.398s 00:06:16.309 user 0m2.408s 00:06:16.309 sys 0m2.234s 00:06:16.309 08:19:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:16.309 ************************************ 00:06:16.309 END TEST dd_flags_misc 00:06:16.309 ************************************ 00:06:16.309 08:19:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:16.568 * Second test run, disabling liburing, forcing AIO 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:16.568 ************************************ 00:06:16.568 START TEST dd_flag_append_forced_aio 00:06:16.568 ************************************ 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1132 -- # append 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=pszkovuj81chs9coa2gzpsh7fsexgifd 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=drxgrkhrup6ajydqenso3qrah3zz3fqd 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s pszkovuj81chs9coa2gzpsh7fsexgifd 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s drxgrkhrup6ajydqenso3qrah3zz3fqd 00:06:16.568 08:19:03 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:16.568 [2024-11-20 08:19:03.957236] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:16.568 [2024-11-20 08:19:03.957346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60171 ] 00:06:16.568 [2024-11-20 08:19:04.098059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.828 [2024-11-20 08:19:04.157024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.828 [2024-11-20 08:19:04.214624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.828  [2024-11-20T08:19:04.648Z] Copying: 32/32 [B] (average 31 kBps) 00:06:17.087 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ drxgrkhrup6ajydqenso3qrah3zz3fqdpszkovuj81chs9coa2gzpsh7fsexgifd == \d\r\x\g\r\k\h\r\u\p\6\a\j\y\d\q\e\n\s\o\3\q\r\a\h\3\z\z\3\f\q\d\p\s\z\k\o\v\u\j\8\1\c\h\s\9\c\o\a\2\g\z\p\s\h\7\f\s\e\x\g\i\f\d ]] 00:06:17.087 00:06:17.087 real 0m0.572s 00:06:17.087 user 0m0.307s 00:06:17.087 sys 0m0.144s 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:17.087 ************************************ 00:06:17.087 END TEST dd_flag_append_forced_aio 00:06:17.087 ************************************ 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:17.087 ************************************ 00:06:17.087 START TEST dd_flag_directory_forced_aio 00:06:17.087 ************************************ 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1132 -- # directory 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # local es=0 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:17.087 08:19:04 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.087 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:17.088 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.088 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:17.088 08:19:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:17.088 [2024-11-20 08:19:04.587425] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:17.088 [2024-11-20 08:19:04.587530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60198 ] 00:06:17.346 [2024-11-20 08:19:04.734941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.346 [2024-11-20 08:19:04.798017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.346 [2024-11-20 08:19:04.854146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.346 [2024-11-20 08:19:04.890795] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:17.346 [2024-11-20 08:19:04.890882] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:17.346 [2024-11-20 08:19:04.890918] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.605 [2024-11-20 08:19:05.012185] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@658 -- # es=236 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@667 -- # es=108 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # case "$es" in 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # es=1 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # local es=0 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.605 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:17.606 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.606 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:17.606 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.606 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:17.606 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:17.606 [2024-11-20 08:19:05.149991] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:17.606 [2024-11-20 08:19:05.150117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60207 ] 00:06:17.864 [2024-11-20 08:19:05.294966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.864 [2024-11-20 08:19:05.355688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.864 [2024-11-20 08:19:05.414322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.122 [2024-11-20 08:19:05.454920] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:18.122 [2024-11-20 08:19:05.454976] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:18.123 [2024-11-20 08:19:05.454996] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.123 [2024-11-20 08:19:05.575198] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:18.123 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@658 -- # es=236 00:06:18.123 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:06:18.123 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@667 -- # es=108 00:06:18.123 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # case "$es" in 00:06:18.123 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # es=1 00:06:18.123 08:19:05 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:06:18.123 00:06:18.123 real 0m1.109s 00:06:18.123 user 0m0.608s 00:06:18.123 sys 0m0.289s 00:06:18.123 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:18.123 ************************************ 00:06:18.123 END TEST dd_flag_directory_forced_aio 00:06:18.123 ************************************ 00:06:18.123 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.123 08:19:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:18.123 08:19:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:18.123 08:19:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:18.123 08:19:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:18.381 ************************************ 00:06:18.381 START TEST dd_flag_nofollow_forced_aio 00:06:18.381 ************************************ 00:06:18.381 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1132 -- # nofollow 00:06:18.381 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:18.381 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:18.381 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:18.381 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:18.382 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.382 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # local es=0 00:06:18.382 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.382 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.382 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:18.382 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.382 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:18.382 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.382 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:18.382 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.382 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:18.382 08:19:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.382 [2024-11-20 08:19:05.758044] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:18.382 [2024-11-20 08:19:05.758163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60236 ] 00:06:18.382 [2024-11-20 08:19:05.900327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.640 [2024-11-20 08:19:05.958265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.640 [2024-11-20 08:19:06.013964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.640 [2024-11-20 08:19:06.052976] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:18.640 [2024-11-20 08:19:06.053030] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:18.640 [2024-11-20 08:19:06.053067] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.640 [2024-11-20 08:19:06.166713] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@658 -- # es=216 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@667 -- # es=88 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # case "$es" in 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # es=1 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # local es=0 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@643 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:18.899 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:18.899 [2024-11-20 08:19:06.286493] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:18.899 [2024-11-20 08:19:06.286586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60245 ] 00:06:18.899 [2024-11-20 08:19:06.433203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.158 [2024-11-20 08:19:06.484037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.158 [2024-11-20 08:19:06.541974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.158 [2024-11-20 08:19:06.575913] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:19.158 [2024-11-20 08:19:06.576225] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:19.158 [2024-11-20 08:19:06.576271] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.158 [2024-11-20 08:19:06.691317] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:19.417 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@658 -- # es=216 00:06:19.417 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:06:19.417 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@667 -- # es=88 00:06:19.417 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # case "$es" in 00:06:19.417 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # es=1 00:06:19.417 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:06:19.417 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:19.417 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:19.417 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:19.417 08:19:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.417 [2024-11-20 08:19:06.808978] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:19.417 [2024-11-20 08:19:06.809079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60253 ] 00:06:19.417 [2024-11-20 08:19:06.956662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.676 [2024-11-20 08:19:07.004406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.676 [2024-11-20 08:19:07.060360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.676  [2024-11-20T08:19:07.496Z] Copying: 512/512 [B] (average 500 kBps) 00:06:19.935 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ p3jy13rc62vmrf76gamo09uhui2rh4ofjbmr2k6e74s061qk50ocbjyqqpu08vclfsa2j6e86l4z1xzo9t3lxrc6elztmu2hu1lunrx896ga3w0pepzh8qrn3ox771ezkgrd4omgytdxgcfq3tkeqk1z9tnn4o9q0yecxgc4ur91fdp3h0ah4r2s2qp6yo60gq8p46o7bwuujhe0ymsc3f20zd41vn58ywnrga8gktylgn2i5tchr4i90edj8boi1qao4uq7zu9r1bevfcppkgp3y7qal8ik01ti69gwrkvsi0d40g5syddgxdtfwmzifre04rjm2flptyosls9cmlxsrr3kaa8hl44j1ymdh87mjienok19gonca8g1gjmkerlbovf8h6kkze0ykib7cb2m5gg6tp81ld47i2vylmzibmiifb2d6tl779o2icwvylsl6zgjvoquymdwfeusbvkweasv1dalm4jzyf3qds11s78dbw6zqe04rcpl7i0c == \p\3\j\y\1\3\r\c\6\2\v\m\r\f\7\6\g\a\m\o\0\9\u\h\u\i\2\r\h\4\o\f\j\b\m\r\2\k\6\e\7\4\s\0\6\1\q\k\5\0\o\c\b\j\y\q\q\p\u\0\8\v\c\l\f\s\a\2\j\6\e\8\6\l\4\z\1\x\z\o\9\t\3\l\x\r\c\6\e\l\z\t\m\u\2\h\u\1\l\u\n\r\x\8\9\6\g\a\3\w\0\p\e\p\z\h\8\q\r\n\3\o\x\7\7\1\e\z\k\g\r\d\4\o\m\g\y\t\d\x\g\c\f\q\3\t\k\e\q\k\1\z\9\t\n\n\4\o\9\q\0\y\e\c\x\g\c\4\u\r\9\1\f\d\p\3\h\0\a\h\4\r\2\s\2\q\p\6\y\o\6\0\g\q\8\p\4\6\o\7\b\w\u\u\j\h\e\0\y\m\s\c\3\f\2\0\z\d\4\1\v\n\5\8\y\w\n\r\g\a\8\g\k\t\y\l\g\n\2\i\5\t\c\h\r\4\i\9\0\e\d\j\8\b\o\i\1\q\a\o\4\u\q\7\z\u\9\r\1\b\e\v\f\c\p\p\k\g\p\3\y\7\q\a\l\8\i\k\0\1\t\i\6\9\g\w\r\k\v\s\i\0\d\4\0\g\5\s\y\d\d\g\x\d\t\f\w\m\z\i\f\r\e\0\4\r\j\m\2\f\l\p\t\y\o\s\l\s\9\c\m\l\x\s\r\r\3\k\a\a\8\h\l\4\4\j\1\y\m\d\h\8\7\m\j\i\e\n\o\k\1\9\g\o\n\c\a\8\g\1\g\j\m\k\e\r\l\b\o\v\f\8\h\6\k\k\z\e\0\y\k\i\b\7\c\b\2\m\5\g\g\6\t\p\8\1\l\d\4\7\i\2\v\y\l\m\z\i\b\m\i\i\f\b\2\d\6\t\l\7\7\9\o\2\i\c\w\v\y\l\s\l\6\z\g\j\v\o\q\u\y\m\d\w\f\e\u\s\b\v\k\w\e\a\s\v\1\d\a\l\m\4\j\z\y\f\3\q\d\s\1\1\s\7\8\d\b\w\6\z\q\e\0\4\r\c\p\l\7\i\0\c ]] 00:06:19.935 00:06:19.935 real 0m1.614s 00:06:19.935 user 0m0.854s 00:06:19.935 sys 0m0.428s 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:19.935 ************************************ 00:06:19.935 END TEST dd_flag_nofollow_forced_aio 00:06:19.935 ************************************ 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:19.935 ************************************ 00:06:19.935 START TEST dd_flag_noatime_forced_aio 00:06:19.935 ************************************ 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1132 -- # noatime 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732090747 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732090747 00:06:19.935 08:19:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:20.869 08:19:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.127 [2024-11-20 08:19:08.444012] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:21.127 [2024-11-20 08:19:08.444526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60299 ] 00:06:21.127 [2024-11-20 08:19:08.598826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.127 [2024-11-20 08:19:08.656972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.386 [2024-11-20 08:19:08.716035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.386  [2024-11-20T08:19:09.205Z] Copying: 512/512 [B] (average 500 kBps) 00:06:21.644 00:06:21.644 08:19:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.644 08:19:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732090747 )) 00:06:21.644 08:19:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.644 08:19:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732090747 )) 00:06:21.644 08:19:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.644 [2024-11-20 08:19:09.029299] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:21.644 [2024-11-20 08:19:09.029395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60306 ] 00:06:21.644 [2024-11-20 08:19:09.180148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.903 [2024-11-20 08:19:09.239103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.903 [2024-11-20 08:19:09.294285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.903  [2024-11-20T08:19:09.722Z] Copying: 512/512 [B] (average 500 kBps) 00:06:22.161 00:06:22.161 08:19:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:22.161 ************************************ 00:06:22.161 END TEST dd_flag_noatime_forced_aio 00:06:22.161 ************************************ 00:06:22.161 08:19:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732090749 )) 00:06:22.162 00:06:22.162 real 0m2.179s 00:06:22.162 user 0m0.613s 00:06:22.162 sys 0m0.317s 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:22.162 ************************************ 00:06:22.162 START TEST dd_flags_misc_forced_aio 00:06:22.162 ************************************ 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1132 -- # io 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.162 08:19:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:22.162 [2024-11-20 08:19:09.659683] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:22.162 [2024-11-20 08:19:09.660058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60337 ] 00:06:22.420 [2024-11-20 08:19:09.802615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.420 [2024-11-20 08:19:09.866940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.420 [2024-11-20 08:19:09.922900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.420  [2024-11-20T08:19:10.240Z] Copying: 512/512 [B] (average 500 kBps) 00:06:22.679 00:06:22.679 08:19:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ w1bnspp88ag7ot8f7cqj3s4586318i204bvo8a5nml8tytq33dztybin51v41z1v4phvpgm4u2ethto16qlpyw16lya4ubvbk44ar11ewa7y52vp217mmplgyayxk2c7b0l59vmosz8x187ti3geyevu4buebgwsxasrgn3o3y61lmwrfnijtrd73aoeziu3jguxp2ycuzhek0v7m57zaoq01tdi1gdwt5trb36044sph26yst5j20t90u0zx89jbvc8pnra78j5z5xczkpolv6dukdusm491crzyl146gfpqevm1ztfskzjms15z9oh7dqp3m51zjqx4gociexq4e64a0mkl6mlexkh1bnukns09cc4sxkj4913qm8i1hdr07jll10kgfshexmillrngms10c2i5rjvwf13f5w2nghzpsdr6ut9jk5s9n8gas4ped7uo861nruitkojbd0nbrn7a3u1u8gbqyhketnxrp2mq1o3amu1d8uopt378qms == 
\w\1\b\n\s\p\p\8\8\a\g\7\o\t\8\f\7\c\q\j\3\s\4\5\8\6\3\1\8\i\2\0\4\b\v\o\8\a\5\n\m\l\8\t\y\t\q\3\3\d\z\t\y\b\i\n\5\1\v\4\1\z\1\v\4\p\h\v\p\g\m\4\u\2\e\t\h\t\o\1\6\q\l\p\y\w\1\6\l\y\a\4\u\b\v\b\k\4\4\a\r\1\1\e\w\a\7\y\5\2\v\p\2\1\7\m\m\p\l\g\y\a\y\x\k\2\c\7\b\0\l\5\9\v\m\o\s\z\8\x\1\8\7\t\i\3\g\e\y\e\v\u\4\b\u\e\b\g\w\s\x\a\s\r\g\n\3\o\3\y\6\1\l\m\w\r\f\n\i\j\t\r\d\7\3\a\o\e\z\i\u\3\j\g\u\x\p\2\y\c\u\z\h\e\k\0\v\7\m\5\7\z\a\o\q\0\1\t\d\i\1\g\d\w\t\5\t\r\b\3\6\0\4\4\s\p\h\2\6\y\s\t\5\j\2\0\t\9\0\u\0\z\x\8\9\j\b\v\c\8\p\n\r\a\7\8\j\5\z\5\x\c\z\k\p\o\l\v\6\d\u\k\d\u\s\m\4\9\1\c\r\z\y\l\1\4\6\g\f\p\q\e\v\m\1\z\t\f\s\k\z\j\m\s\1\5\z\9\o\h\7\d\q\p\3\m\5\1\z\j\q\x\4\g\o\c\i\e\x\q\4\e\6\4\a\0\m\k\l\6\m\l\e\x\k\h\1\b\n\u\k\n\s\0\9\c\c\4\s\x\k\j\4\9\1\3\q\m\8\i\1\h\d\r\0\7\j\l\l\1\0\k\g\f\s\h\e\x\m\i\l\l\r\n\g\m\s\1\0\c\2\i\5\r\j\v\w\f\1\3\f\5\w\2\n\g\h\z\p\s\d\r\6\u\t\9\j\k\5\s\9\n\8\g\a\s\4\p\e\d\7\u\o\8\6\1\n\r\u\i\t\k\o\j\b\d\0\n\b\r\n\7\a\3\u\1\u\8\g\b\q\y\h\k\e\t\n\x\r\p\2\m\q\1\o\3\a\m\u\1\d\8\u\o\p\t\3\7\8\q\m\s ]] 00:06:22.679 08:19:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.679 08:19:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:22.938 [2024-11-20 08:19:10.253587] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:22.938 [2024-11-20 08:19:10.253705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60346 ] 00:06:22.938 [2024-11-20 08:19:10.401446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.938 [2024-11-20 08:19:10.463777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.196 [2024-11-20 08:19:10.518532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.196  [2024-11-20T08:19:11.016Z] Copying: 512/512 [B] (average 500 kBps) 00:06:23.455 00:06:23.455 08:19:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ w1bnspp88ag7ot8f7cqj3s4586318i204bvo8a5nml8tytq33dztybin51v41z1v4phvpgm4u2ethto16qlpyw16lya4ubvbk44ar11ewa7y52vp217mmplgyayxk2c7b0l59vmosz8x187ti3geyevu4buebgwsxasrgn3o3y61lmwrfnijtrd73aoeziu3jguxp2ycuzhek0v7m57zaoq01tdi1gdwt5trb36044sph26yst5j20t90u0zx89jbvc8pnra78j5z5xczkpolv6dukdusm491crzyl146gfpqevm1ztfskzjms15z9oh7dqp3m51zjqx4gociexq4e64a0mkl6mlexkh1bnukns09cc4sxkj4913qm8i1hdr07jll10kgfshexmillrngms10c2i5rjvwf13f5w2nghzpsdr6ut9jk5s9n8gas4ped7uo861nruitkojbd0nbrn7a3u1u8gbqyhketnxrp2mq1o3amu1d8uopt378qms == 
\w\1\b\n\s\p\p\8\8\a\g\7\o\t\8\f\7\c\q\j\3\s\4\5\8\6\3\1\8\i\2\0\4\b\v\o\8\a\5\n\m\l\8\t\y\t\q\3\3\d\z\t\y\b\i\n\5\1\v\4\1\z\1\v\4\p\h\v\p\g\m\4\u\2\e\t\h\t\o\1\6\q\l\p\y\w\1\6\l\y\a\4\u\b\v\b\k\4\4\a\r\1\1\e\w\a\7\y\5\2\v\p\2\1\7\m\m\p\l\g\y\a\y\x\k\2\c\7\b\0\l\5\9\v\m\o\s\z\8\x\1\8\7\t\i\3\g\e\y\e\v\u\4\b\u\e\b\g\w\s\x\a\s\r\g\n\3\o\3\y\6\1\l\m\w\r\f\n\i\j\t\r\d\7\3\a\o\e\z\i\u\3\j\g\u\x\p\2\y\c\u\z\h\e\k\0\v\7\m\5\7\z\a\o\q\0\1\t\d\i\1\g\d\w\t\5\t\r\b\3\6\0\4\4\s\p\h\2\6\y\s\t\5\j\2\0\t\9\0\u\0\z\x\8\9\j\b\v\c\8\p\n\r\a\7\8\j\5\z\5\x\c\z\k\p\o\l\v\6\d\u\k\d\u\s\m\4\9\1\c\r\z\y\l\1\4\6\g\f\p\q\e\v\m\1\z\t\f\s\k\z\j\m\s\1\5\z\9\o\h\7\d\q\p\3\m\5\1\z\j\q\x\4\g\o\c\i\e\x\q\4\e\6\4\a\0\m\k\l\6\m\l\e\x\k\h\1\b\n\u\k\n\s\0\9\c\c\4\s\x\k\j\4\9\1\3\q\m\8\i\1\h\d\r\0\7\j\l\l\1\0\k\g\f\s\h\e\x\m\i\l\l\r\n\g\m\s\1\0\c\2\i\5\r\j\v\w\f\1\3\f\5\w\2\n\g\h\z\p\s\d\r\6\u\t\9\j\k\5\s\9\n\8\g\a\s\4\p\e\d\7\u\o\8\6\1\n\r\u\i\t\k\o\j\b\d\0\n\b\r\n\7\a\3\u\1\u\8\g\b\q\y\h\k\e\t\n\x\r\p\2\m\q\1\o\3\a\m\u\1\d\8\u\o\p\t\3\7\8\q\m\s ]] 00:06:23.455 08:19:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.455 08:19:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:23.455 [2024-11-20 08:19:10.824499] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:23.455 [2024-11-20 08:19:10.824889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60354 ] 00:06:23.455 [2024-11-20 08:19:10.972260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.713 [2024-11-20 08:19:11.024722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.713 [2024-11-20 08:19:11.080496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.713  [2024-11-20T08:19:11.533Z] Copying: 512/512 [B] (average 500 kBps) 00:06:23.972 00:06:23.972 08:19:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ w1bnspp88ag7ot8f7cqj3s4586318i204bvo8a5nml8tytq33dztybin51v41z1v4phvpgm4u2ethto16qlpyw16lya4ubvbk44ar11ewa7y52vp217mmplgyayxk2c7b0l59vmosz8x187ti3geyevu4buebgwsxasrgn3o3y61lmwrfnijtrd73aoeziu3jguxp2ycuzhek0v7m57zaoq01tdi1gdwt5trb36044sph26yst5j20t90u0zx89jbvc8pnra78j5z5xczkpolv6dukdusm491crzyl146gfpqevm1ztfskzjms15z9oh7dqp3m51zjqx4gociexq4e64a0mkl6mlexkh1bnukns09cc4sxkj4913qm8i1hdr07jll10kgfshexmillrngms10c2i5rjvwf13f5w2nghzpsdr6ut9jk5s9n8gas4ped7uo861nruitkojbd0nbrn7a3u1u8gbqyhketnxrp2mq1o3amu1d8uopt378qms == 
\w\1\b\n\s\p\p\8\8\a\g\7\o\t\8\f\7\c\q\j\3\s\4\5\8\6\3\1\8\i\2\0\4\b\v\o\8\a\5\n\m\l\8\t\y\t\q\3\3\d\z\t\y\b\i\n\5\1\v\4\1\z\1\v\4\p\h\v\p\g\m\4\u\2\e\t\h\t\o\1\6\q\l\p\y\w\1\6\l\y\a\4\u\b\v\b\k\4\4\a\r\1\1\e\w\a\7\y\5\2\v\p\2\1\7\m\m\p\l\g\y\a\y\x\k\2\c\7\b\0\l\5\9\v\m\o\s\z\8\x\1\8\7\t\i\3\g\e\y\e\v\u\4\b\u\e\b\g\w\s\x\a\s\r\g\n\3\o\3\y\6\1\l\m\w\r\f\n\i\j\t\r\d\7\3\a\o\e\z\i\u\3\j\g\u\x\p\2\y\c\u\z\h\e\k\0\v\7\m\5\7\z\a\o\q\0\1\t\d\i\1\g\d\w\t\5\t\r\b\3\6\0\4\4\s\p\h\2\6\y\s\t\5\j\2\0\t\9\0\u\0\z\x\8\9\j\b\v\c\8\p\n\r\a\7\8\j\5\z\5\x\c\z\k\p\o\l\v\6\d\u\k\d\u\s\m\4\9\1\c\r\z\y\l\1\4\6\g\f\p\q\e\v\m\1\z\t\f\s\k\z\j\m\s\1\5\z\9\o\h\7\d\q\p\3\m\5\1\z\j\q\x\4\g\o\c\i\e\x\q\4\e\6\4\a\0\m\k\l\6\m\l\e\x\k\h\1\b\n\u\k\n\s\0\9\c\c\4\s\x\k\j\4\9\1\3\q\m\8\i\1\h\d\r\0\7\j\l\l\1\0\k\g\f\s\h\e\x\m\i\l\l\r\n\g\m\s\1\0\c\2\i\5\r\j\v\w\f\1\3\f\5\w\2\n\g\h\z\p\s\d\r\6\u\t\9\j\k\5\s\9\n\8\g\a\s\4\p\e\d\7\u\o\8\6\1\n\r\u\i\t\k\o\j\b\d\0\n\b\r\n\7\a\3\u\1\u\8\g\b\q\y\h\k\e\t\n\x\r\p\2\m\q\1\o\3\a\m\u\1\d\8\u\o\p\t\3\7\8\q\m\s ]] 00:06:23.972 08:19:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.972 08:19:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:23.972 [2024-11-20 08:19:11.391441] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:23.972 [2024-11-20 08:19:11.391840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60361 ] 00:06:24.230 [2024-11-20 08:19:11.542039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.230 [2024-11-20 08:19:11.608725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.230 [2024-11-20 08:19:11.665001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.231  [2024-11-20T08:19:12.050Z] Copying: 512/512 [B] (average 500 kBps) 00:06:24.489 00:06:24.489 08:19:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ w1bnspp88ag7ot8f7cqj3s4586318i204bvo8a5nml8tytq33dztybin51v41z1v4phvpgm4u2ethto16qlpyw16lya4ubvbk44ar11ewa7y52vp217mmplgyayxk2c7b0l59vmosz8x187ti3geyevu4buebgwsxasrgn3o3y61lmwrfnijtrd73aoeziu3jguxp2ycuzhek0v7m57zaoq01tdi1gdwt5trb36044sph26yst5j20t90u0zx89jbvc8pnra78j5z5xczkpolv6dukdusm491crzyl146gfpqevm1ztfskzjms15z9oh7dqp3m51zjqx4gociexq4e64a0mkl6mlexkh1bnukns09cc4sxkj4913qm8i1hdr07jll10kgfshexmillrngms10c2i5rjvwf13f5w2nghzpsdr6ut9jk5s9n8gas4ped7uo861nruitkojbd0nbrn7a3u1u8gbqyhketnxrp2mq1o3amu1d8uopt378qms == 
\w\1\b\n\s\p\p\8\8\a\g\7\o\t\8\f\7\c\q\j\3\s\4\5\8\6\3\1\8\i\2\0\4\b\v\o\8\a\5\n\m\l\8\t\y\t\q\3\3\d\z\t\y\b\i\n\5\1\v\4\1\z\1\v\4\p\h\v\p\g\m\4\u\2\e\t\h\t\o\1\6\q\l\p\y\w\1\6\l\y\a\4\u\b\v\b\k\4\4\a\r\1\1\e\w\a\7\y\5\2\v\p\2\1\7\m\m\p\l\g\y\a\y\x\k\2\c\7\b\0\l\5\9\v\m\o\s\z\8\x\1\8\7\t\i\3\g\e\y\e\v\u\4\b\u\e\b\g\w\s\x\a\s\r\g\n\3\o\3\y\6\1\l\m\w\r\f\n\i\j\t\r\d\7\3\a\o\e\z\i\u\3\j\g\u\x\p\2\y\c\u\z\h\e\k\0\v\7\m\5\7\z\a\o\q\0\1\t\d\i\1\g\d\w\t\5\t\r\b\3\6\0\4\4\s\p\h\2\6\y\s\t\5\j\2\0\t\9\0\u\0\z\x\8\9\j\b\v\c\8\p\n\r\a\7\8\j\5\z\5\x\c\z\k\p\o\l\v\6\d\u\k\d\u\s\m\4\9\1\c\r\z\y\l\1\4\6\g\f\p\q\e\v\m\1\z\t\f\s\k\z\j\m\s\1\5\z\9\o\h\7\d\q\p\3\m\5\1\z\j\q\x\4\g\o\c\i\e\x\q\4\e\6\4\a\0\m\k\l\6\m\l\e\x\k\h\1\b\n\u\k\n\s\0\9\c\c\4\s\x\k\j\4\9\1\3\q\m\8\i\1\h\d\r\0\7\j\l\l\1\0\k\g\f\s\h\e\x\m\i\l\l\r\n\g\m\s\1\0\c\2\i\5\r\j\v\w\f\1\3\f\5\w\2\n\g\h\z\p\s\d\r\6\u\t\9\j\k\5\s\9\n\8\g\a\s\4\p\e\d\7\u\o\8\6\1\n\r\u\i\t\k\o\j\b\d\0\n\b\r\n\7\a\3\u\1\u\8\g\b\q\y\h\k\e\t\n\x\r\p\2\m\q\1\o\3\a\m\u\1\d\8\u\o\p\t\3\7\8\q\m\s ]] 00:06:24.489 08:19:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:24.489 08:19:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:24.489 08:19:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:24.489 08:19:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:24.489 08:19:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.489 08:19:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:24.489 [2024-11-20 08:19:11.989191] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
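The dd_flags_misc runs above sweep every input flag in (direct, nonblock) against every output flag in (direct, nonblock, sync, dsync): 512 generated bytes are copied per combination and the destination is verified by matching its contents against the expected string. A condensed sketch of that flag matrix, assuming the same SPDK_DD build as above; cmp stands in here for the trace's literal [[ ... == ... ]] string comparison:

  SPDK_DD=${SPDK_DD:-./build/bin/spdk_dd}
  flags_ro=(direct nonblock)                 # flags applied when reading the input file
  flags_rw=("${flags_ro[@]}" sync dsync)     # flags applied when writing the output file
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      dd if=/dev/urandom of=dd.dump0 bs=512 count=1 status=none
      "$SPDK_DD" --aio --if=dd.dump0 --iflag="$flag_ro" \
                 --of=dd.dump1 --oflag="$flag_rw"
      cmp -s dd.dump0 dd.dump1 || echo "content mismatch for $flag_ro/$flag_rw"
    done
  done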
00:06:24.489 [2024-11-20 08:19:11.989573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60369 ] 00:06:24.747 [2024-11-20 08:19:12.137882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.747 [2024-11-20 08:19:12.203374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.747 [2024-11-20 08:19:12.261746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.747  [2024-11-20T08:19:12.566Z] Copying: 512/512 [B] (average 500 kBps) 00:06:25.006 00:06:25.006 08:19:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xsjrt1iduujaitr44slai295kt3g2b3v3qggbic6kug1p8la8mr10y05lc7na7ihxsvb2k4oqozf5hnyslwi6o7yn1s846na0bm14cg7lf2madtxnczyuvuutvr8z9n7byaov2290bdoxc5i8qy4ktoh9n8i91tkip66ntg50qc9xcj5n8rq7s0l419fi4d58y3s1sqkduw38sqdlkex5zdnyi8942ruxubc4ydx7ig1g705kba9bbip2pb8abreorqp7r0ni65rtow2668xbjgarve5fij5944hydabv03r73v8006sa7fv5vyc8b67p6vixgvqj2exbjwck2v2xatbhx8ag5j5dewkgxhbk0r1hn5euw9ut9ssfcvym7z5hgu0byrsdlyq64kf8lwmib1m70tn6wlr2suow8v9x1m3x2rhuda3qkib90c7usfou3m6zk8xcxfyf2nhlyfozh2tileekqffd34tljzkv6uhj1tcr6hz89hibapx8a4f == \x\s\j\r\t\1\i\d\u\u\j\a\i\t\r\4\4\s\l\a\i\2\9\5\k\t\3\g\2\b\3\v\3\q\g\g\b\i\c\6\k\u\g\1\p\8\l\a\8\m\r\1\0\y\0\5\l\c\7\n\a\7\i\h\x\s\v\b\2\k\4\o\q\o\z\f\5\h\n\y\s\l\w\i\6\o\7\y\n\1\s\8\4\6\n\a\0\b\m\1\4\c\g\7\l\f\2\m\a\d\t\x\n\c\z\y\u\v\u\u\t\v\r\8\z\9\n\7\b\y\a\o\v\2\2\9\0\b\d\o\x\c\5\i\8\q\y\4\k\t\o\h\9\n\8\i\9\1\t\k\i\p\6\6\n\t\g\5\0\q\c\9\x\c\j\5\n\8\r\q\7\s\0\l\4\1\9\f\i\4\d\5\8\y\3\s\1\s\q\k\d\u\w\3\8\s\q\d\l\k\e\x\5\z\d\n\y\i\8\9\4\2\r\u\x\u\b\c\4\y\d\x\7\i\g\1\g\7\0\5\k\b\a\9\b\b\i\p\2\p\b\8\a\b\r\e\o\r\q\p\7\r\0\n\i\6\5\r\t\o\w\2\6\6\8\x\b\j\g\a\r\v\e\5\f\i\j\5\9\4\4\h\y\d\a\b\v\0\3\r\7\3\v\8\0\0\6\s\a\7\f\v\5\v\y\c\8\b\6\7\p\6\v\i\x\g\v\q\j\2\e\x\b\j\w\c\k\2\v\2\x\a\t\b\h\x\8\a\g\5\j\5\d\e\w\k\g\x\h\b\k\0\r\1\h\n\5\e\u\w\9\u\t\9\s\s\f\c\v\y\m\7\z\5\h\g\u\0\b\y\r\s\d\l\y\q\6\4\k\f\8\l\w\m\i\b\1\m\7\0\t\n\6\w\l\r\2\s\u\o\w\8\v\9\x\1\m\3\x\2\r\h\u\d\a\3\q\k\i\b\9\0\c\7\u\s\f\o\u\3\m\6\z\k\8\x\c\x\f\y\f\2\n\h\l\y\f\o\z\h\2\t\i\l\e\e\k\q\f\f\d\3\4\t\l\j\z\k\v\6\u\h\j\1\t\c\r\6\h\z\8\9\h\i\b\a\p\x\8\a\4\f ]] 00:06:25.006 08:19:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.006 08:19:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:25.264 [2024-11-20 08:19:12.586041] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:25.264 [2024-11-20 08:19:12.586136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60382 ] 00:06:25.264 [2024-11-20 08:19:12.735133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.264 [2024-11-20 08:19:12.796278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.531 [2024-11-20 08:19:12.852708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.531  [2024-11-20T08:19:13.381Z] Copying: 512/512 [B] (average 500 kBps) 00:06:25.820 00:06:25.820 08:19:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xsjrt1iduujaitr44slai295kt3g2b3v3qggbic6kug1p8la8mr10y05lc7na7ihxsvb2k4oqozf5hnyslwi6o7yn1s846na0bm14cg7lf2madtxnczyuvuutvr8z9n7byaov2290bdoxc5i8qy4ktoh9n8i91tkip66ntg50qc9xcj5n8rq7s0l419fi4d58y3s1sqkduw38sqdlkex5zdnyi8942ruxubc4ydx7ig1g705kba9bbip2pb8abreorqp7r0ni65rtow2668xbjgarve5fij5944hydabv03r73v8006sa7fv5vyc8b67p6vixgvqj2exbjwck2v2xatbhx8ag5j5dewkgxhbk0r1hn5euw9ut9ssfcvym7z5hgu0byrsdlyq64kf8lwmib1m70tn6wlr2suow8v9x1m3x2rhuda3qkib90c7usfou3m6zk8xcxfyf2nhlyfozh2tileekqffd34tljzkv6uhj1tcr6hz89hibapx8a4f == \x\s\j\r\t\1\i\d\u\u\j\a\i\t\r\4\4\s\l\a\i\2\9\5\k\t\3\g\2\b\3\v\3\q\g\g\b\i\c\6\k\u\g\1\p\8\l\a\8\m\r\1\0\y\0\5\l\c\7\n\a\7\i\h\x\s\v\b\2\k\4\o\q\o\z\f\5\h\n\y\s\l\w\i\6\o\7\y\n\1\s\8\4\6\n\a\0\b\m\1\4\c\g\7\l\f\2\m\a\d\t\x\n\c\z\y\u\v\u\u\t\v\r\8\z\9\n\7\b\y\a\o\v\2\2\9\0\b\d\o\x\c\5\i\8\q\y\4\k\t\o\h\9\n\8\i\9\1\t\k\i\p\6\6\n\t\g\5\0\q\c\9\x\c\j\5\n\8\r\q\7\s\0\l\4\1\9\f\i\4\d\5\8\y\3\s\1\s\q\k\d\u\w\3\8\s\q\d\l\k\e\x\5\z\d\n\y\i\8\9\4\2\r\u\x\u\b\c\4\y\d\x\7\i\g\1\g\7\0\5\k\b\a\9\b\b\i\p\2\p\b\8\a\b\r\e\o\r\q\p\7\r\0\n\i\6\5\r\t\o\w\2\6\6\8\x\b\j\g\a\r\v\e\5\f\i\j\5\9\4\4\h\y\d\a\b\v\0\3\r\7\3\v\8\0\0\6\s\a\7\f\v\5\v\y\c\8\b\6\7\p\6\v\i\x\g\v\q\j\2\e\x\b\j\w\c\k\2\v\2\x\a\t\b\h\x\8\a\g\5\j\5\d\e\w\k\g\x\h\b\k\0\r\1\h\n\5\e\u\w\9\u\t\9\s\s\f\c\v\y\m\7\z\5\h\g\u\0\b\y\r\s\d\l\y\q\6\4\k\f\8\l\w\m\i\b\1\m\7\0\t\n\6\w\l\r\2\s\u\o\w\8\v\9\x\1\m\3\x\2\r\h\u\d\a\3\q\k\i\b\9\0\c\7\u\s\f\o\u\3\m\6\z\k\8\x\c\x\f\y\f\2\n\h\l\y\f\o\z\h\2\t\i\l\e\e\k\q\f\f\d\3\4\t\l\j\z\k\v\6\u\h\j\1\t\c\r\6\h\z\8\9\h\i\b\a\p\x\8\a\4\f ]] 00:06:25.820 08:19:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.820 08:19:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:25.820 [2024-11-20 08:19:13.162557] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:25.820 [2024-11-20 08:19:13.162662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60384 ] 00:06:25.820 [2024-11-20 08:19:13.310547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.820 [2024-11-20 08:19:13.371796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.079 [2024-11-20 08:19:13.428450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.079  [2024-11-20T08:19:13.899Z] Copying: 512/512 [B] (average 500 kBps) 00:06:26.338 00:06:26.338 08:19:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xsjrt1iduujaitr44slai295kt3g2b3v3qggbic6kug1p8la8mr10y05lc7na7ihxsvb2k4oqozf5hnyslwi6o7yn1s846na0bm14cg7lf2madtxnczyuvuutvr8z9n7byaov2290bdoxc5i8qy4ktoh9n8i91tkip66ntg50qc9xcj5n8rq7s0l419fi4d58y3s1sqkduw38sqdlkex5zdnyi8942ruxubc4ydx7ig1g705kba9bbip2pb8abreorqp7r0ni65rtow2668xbjgarve5fij5944hydabv03r73v8006sa7fv5vyc8b67p6vixgvqj2exbjwck2v2xatbhx8ag5j5dewkgxhbk0r1hn5euw9ut9ssfcvym7z5hgu0byrsdlyq64kf8lwmib1m70tn6wlr2suow8v9x1m3x2rhuda3qkib90c7usfou3m6zk8xcxfyf2nhlyfozh2tileekqffd34tljzkv6uhj1tcr6hz89hibapx8a4f == \x\s\j\r\t\1\i\d\u\u\j\a\i\t\r\4\4\s\l\a\i\2\9\5\k\t\3\g\2\b\3\v\3\q\g\g\b\i\c\6\k\u\g\1\p\8\l\a\8\m\r\1\0\y\0\5\l\c\7\n\a\7\i\h\x\s\v\b\2\k\4\o\q\o\z\f\5\h\n\y\s\l\w\i\6\o\7\y\n\1\s\8\4\6\n\a\0\b\m\1\4\c\g\7\l\f\2\m\a\d\t\x\n\c\z\y\u\v\u\u\t\v\r\8\z\9\n\7\b\y\a\o\v\2\2\9\0\b\d\o\x\c\5\i\8\q\y\4\k\t\o\h\9\n\8\i\9\1\t\k\i\p\6\6\n\t\g\5\0\q\c\9\x\c\j\5\n\8\r\q\7\s\0\l\4\1\9\f\i\4\d\5\8\y\3\s\1\s\q\k\d\u\w\3\8\s\q\d\l\k\e\x\5\z\d\n\y\i\8\9\4\2\r\u\x\u\b\c\4\y\d\x\7\i\g\1\g\7\0\5\k\b\a\9\b\b\i\p\2\p\b\8\a\b\r\e\o\r\q\p\7\r\0\n\i\6\5\r\t\o\w\2\6\6\8\x\b\j\g\a\r\v\e\5\f\i\j\5\9\4\4\h\y\d\a\b\v\0\3\r\7\3\v\8\0\0\6\s\a\7\f\v\5\v\y\c\8\b\6\7\p\6\v\i\x\g\v\q\j\2\e\x\b\j\w\c\k\2\v\2\x\a\t\b\h\x\8\a\g\5\j\5\d\e\w\k\g\x\h\b\k\0\r\1\h\n\5\e\u\w\9\u\t\9\s\s\f\c\v\y\m\7\z\5\h\g\u\0\b\y\r\s\d\l\y\q\6\4\k\f\8\l\w\m\i\b\1\m\7\0\t\n\6\w\l\r\2\s\u\o\w\8\v\9\x\1\m\3\x\2\r\h\u\d\a\3\q\k\i\b\9\0\c\7\u\s\f\o\u\3\m\6\z\k\8\x\c\x\f\y\f\2\n\h\l\y\f\o\z\h\2\t\i\l\e\e\k\q\f\f\d\3\4\t\l\j\z\k\v\6\u\h\j\1\t\c\r\6\h\z\8\9\h\i\b\a\p\x\8\a\4\f ]] 00:06:26.338 08:19:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:26.338 08:19:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:26.338 [2024-11-20 08:19:13.753352] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:26.338 [2024-11-20 08:19:13.753473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60397 ] 00:06:26.597 [2024-11-20 08:19:13.903265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.597 [2024-11-20 08:19:13.967527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.597 [2024-11-20 08:19:14.021501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.597  [2024-11-20T08:19:14.417Z] Copying: 512/512 [B] (average 166 kBps) 00:06:26.856 00:06:26.856 08:19:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xsjrt1iduujaitr44slai295kt3g2b3v3qggbic6kug1p8la8mr10y05lc7na7ihxsvb2k4oqozf5hnyslwi6o7yn1s846na0bm14cg7lf2madtxnczyuvuutvr8z9n7byaov2290bdoxc5i8qy4ktoh9n8i91tkip66ntg50qc9xcj5n8rq7s0l419fi4d58y3s1sqkduw38sqdlkex5zdnyi8942ruxubc4ydx7ig1g705kba9bbip2pb8abreorqp7r0ni65rtow2668xbjgarve5fij5944hydabv03r73v8006sa7fv5vyc8b67p6vixgvqj2exbjwck2v2xatbhx8ag5j5dewkgxhbk0r1hn5euw9ut9ssfcvym7z5hgu0byrsdlyq64kf8lwmib1m70tn6wlr2suow8v9x1m3x2rhuda3qkib90c7usfou3m6zk8xcxfyf2nhlyfozh2tileekqffd34tljzkv6uhj1tcr6hz89hibapx8a4f == \x\s\j\r\t\1\i\d\u\u\j\a\i\t\r\4\4\s\l\a\i\2\9\5\k\t\3\g\2\b\3\v\3\q\g\g\b\i\c\6\k\u\g\1\p\8\l\a\8\m\r\1\0\y\0\5\l\c\7\n\a\7\i\h\x\s\v\b\2\k\4\o\q\o\z\f\5\h\n\y\s\l\w\i\6\o\7\y\n\1\s\8\4\6\n\a\0\b\m\1\4\c\g\7\l\f\2\m\a\d\t\x\n\c\z\y\u\v\u\u\t\v\r\8\z\9\n\7\b\y\a\o\v\2\2\9\0\b\d\o\x\c\5\i\8\q\y\4\k\t\o\h\9\n\8\i\9\1\t\k\i\p\6\6\n\t\g\5\0\q\c\9\x\c\j\5\n\8\r\q\7\s\0\l\4\1\9\f\i\4\d\5\8\y\3\s\1\s\q\k\d\u\w\3\8\s\q\d\l\k\e\x\5\z\d\n\y\i\8\9\4\2\r\u\x\u\b\c\4\y\d\x\7\i\g\1\g\7\0\5\k\b\a\9\b\b\i\p\2\p\b\8\a\b\r\e\o\r\q\p\7\r\0\n\i\6\5\r\t\o\w\2\6\6\8\x\b\j\g\a\r\v\e\5\f\i\j\5\9\4\4\h\y\d\a\b\v\0\3\r\7\3\v\8\0\0\6\s\a\7\f\v\5\v\y\c\8\b\6\7\p\6\v\i\x\g\v\q\j\2\e\x\b\j\w\c\k\2\v\2\x\a\t\b\h\x\8\a\g\5\j\5\d\e\w\k\g\x\h\b\k\0\r\1\h\n\5\e\u\w\9\u\t\9\s\s\f\c\v\y\m\7\z\5\h\g\u\0\b\y\r\s\d\l\y\q\6\4\k\f\8\l\w\m\i\b\1\m\7\0\t\n\6\w\l\r\2\s\u\o\w\8\v\9\x\1\m\3\x\2\r\h\u\d\a\3\q\k\i\b\9\0\c\7\u\s\f\o\u\3\m\6\z\k\8\x\c\x\f\y\f\2\n\h\l\y\f\o\z\h\2\t\i\l\e\e\k\q\f\f\d\3\4\t\l\j\z\k\v\6\u\h\j\1\t\c\r\6\h\z\8\9\h\i\b\a\p\x\8\a\4\f ]] 00:06:26.856 00:06:26.856 real 0m4.671s 00:06:26.856 user 0m2.520s 00:06:26.856 sys 0m1.140s 00:06:26.856 08:19:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:26.856 ************************************ 00:06:26.856 END TEST dd_flags_misc_forced_aio 00:06:26.856 ************************************ 00:06:26.856 08:19:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:26.856 08:19:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:26.856 08:19:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:26.856 08:19:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:26.856 ************************************ 00:06:26.856 END TEST spdk_dd_posix 00:06:26.856 ************************************ 00:06:26.856 00:06:26.856 real 0m20.882s 00:06:26.856 user 0m10.124s 00:06:26.856 sys 0m6.722s 00:06:26.856 08:19:14 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1133 -- # xtrace_disable 00:06:26.856 08:19:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:26.856 08:19:14 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:26.856 08:19:14 spdk_dd -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:26.856 08:19:14 spdk_dd -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:26.856 08:19:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:26.856 ************************************ 00:06:26.856 START TEST spdk_dd_malloc 00:06:26.857 ************************************ 00:06:26.857 08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:27.116 * Looking for test storage... 00:06:27.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1638 -- # lcov --version 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:06:27.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.116 --rc genhtml_branch_coverage=1 00:06:27.116 --rc genhtml_function_coverage=1 00:06:27.116 --rc genhtml_legend=1 00:06:27.116 --rc geninfo_all_blocks=1 00:06:27.116 --rc geninfo_unexecuted_blocks=1 00:06:27.116 00:06:27.116 ' 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:06:27.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.116 --rc genhtml_branch_coverage=1 00:06:27.116 --rc genhtml_function_coverage=1 00:06:27.116 --rc genhtml_legend=1 00:06:27.116 --rc geninfo_all_blocks=1 00:06:27.116 --rc geninfo_unexecuted_blocks=1 00:06:27.116 00:06:27.116 ' 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:06:27.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.116 --rc genhtml_branch_coverage=1 00:06:27.116 --rc genhtml_function_coverage=1 00:06:27.116 --rc genhtml_legend=1 00:06:27.116 --rc geninfo_all_blocks=1 00:06:27.116 --rc geninfo_unexecuted_blocks=1 00:06:27.116 00:06:27.116 ' 00:06:27.116 08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:06:27.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.117 --rc genhtml_branch_coverage=1 00:06:27.117 --rc genhtml_function_coverage=1 00:06:27.117 --rc genhtml_legend=1 00:06:27.117 --rc geninfo_all_blocks=1 00:06:27.117 --rc geninfo_unexecuted_blocks=1 00:06:27.117 00:06:27.117 ' 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.117 08:19:14 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:27.117 
08:19:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:27.117 ************************************ 00:06:27.117 START TEST dd_malloc_copy 00:06:27.117 ************************************ 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1132 -- # malloc_copy 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:27.117 08:19:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:27.117 [2024-11-20 08:19:14.649957] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:27.117 [2024-11-20 08:19:14.650729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60485 ] 00:06:27.117 { 00:06:27.117 "subsystems": [ 00:06:27.117 { 00:06:27.117 "subsystem": "bdev", 00:06:27.117 "config": [ 00:06:27.117 { 00:06:27.117 "params": { 00:06:27.117 "block_size": 512, 00:06:27.117 "num_blocks": 1048576, 00:06:27.117 "name": "malloc0" 00:06:27.117 }, 00:06:27.117 "method": "bdev_malloc_create" 00:06:27.117 }, 00:06:27.117 { 00:06:27.117 "params": { 00:06:27.117 "block_size": 512, 00:06:27.117 "num_blocks": 1048576, 00:06:27.117 "name": "malloc1" 00:06:27.117 }, 00:06:27.117 "method": "bdev_malloc_create" 00:06:27.117 }, 00:06:27.117 { 00:06:27.117 "method": "bdev_wait_for_examine" 00:06:27.117 } 00:06:27.117 ] 00:06:27.117 } 00:06:27.117 ] 00:06:27.117 } 00:06:27.376 [2024-11-20 08:19:14.800405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.376 [2024-11-20 08:19:14.865948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.376 [2024-11-20 08:19:14.920269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.751  [2024-11-20T08:19:17.688Z] Copying: 213/512 [MB] (213 MBps) [2024-11-20T08:19:17.947Z] Copying: 420/512 [MB] (206 MBps) [2024-11-20T08:19:18.514Z] Copying: 512/512 [MB] (average 210 MBps) 00:06:30.953 00:06:30.953 08:19:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:30.953 08:19:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:30.953 08:19:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:30.953 08:19:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:30.953 [2024-11-20 08:19:18.342747] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
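The dd_malloc_copy run above drives spdk_dd entirely against in-memory bdevs: the JSON handed over on /dev/fd/62 creates two malloc bdevs of 1048576 blocks x 512 bytes (512 MiB each) plus a bdev_wait_for_examine step, and the copy is then repeated in the opposite direction so both bdevs get exercised as source and target. A minimal sketch of that invocation, assuming a local spdk_dd build; the JSON keys mirror the config printed in the trace:

  SPDK_DD=${SPDK_DD:-./build/bin/spdk_dd}
  # Two 512 MiB malloc bdevs, then wait for bdev examination to finish.
  conf='{
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_malloc_create",
            "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
          { "method": "bdev_malloc_create",
            "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
          { "method": "bdev_wait_for_examine" }
        ] }
    ]
  }'
  "$SPDK_DD" --ib=malloc0 --ob=malloc1 --json <(printf '%s' "$conf")   # forward copy
  "$SPDK_DD" --ib=malloc1 --ob=malloc0 --json <(printf '%s' "$conf")   # and back again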
00:06:30.953 [2024-11-20 08:19:18.342905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60527 ] 00:06:30.953 { 00:06:30.953 "subsystems": [ 00:06:30.953 { 00:06:30.953 "subsystem": "bdev", 00:06:30.953 "config": [ 00:06:30.953 { 00:06:30.953 "params": { 00:06:30.953 "block_size": 512, 00:06:30.953 "num_blocks": 1048576, 00:06:30.953 "name": "malloc0" 00:06:30.953 }, 00:06:30.953 "method": "bdev_malloc_create" 00:06:30.953 }, 00:06:30.953 { 00:06:30.953 "params": { 00:06:30.953 "block_size": 512, 00:06:30.953 "num_blocks": 1048576, 00:06:30.953 "name": "malloc1" 00:06:30.953 }, 00:06:30.953 "method": "bdev_malloc_create" 00:06:30.953 }, 00:06:30.953 { 00:06:30.953 "method": "bdev_wait_for_examine" 00:06:30.953 } 00:06:30.953 ] 00:06:30.953 } 00:06:30.953 ] 00:06:30.953 } 00:06:30.953 [2024-11-20 08:19:18.488779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.212 [2024-11-20 08:19:18.546399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.212 [2024-11-20 08:19:18.606635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.603  [2024-11-20T08:19:21.099Z] Copying: 207/512 [MB] (207 MBps) [2024-11-20T08:19:21.666Z] Copying: 421/512 [MB] (214 MBps) [2024-11-20T08:19:22.234Z] Copying: 512/512 [MB] (average 212 MBps) 00:06:34.673 00:06:34.673 00:06:34.673 real 0m7.376s 00:06:34.673 user 0m6.351s 00:06:34.673 sys 0m0.861s 00:06:34.673 ************************************ 00:06:34.673 END TEST dd_malloc_copy 00:06:34.673 ************************************ 00:06:34.673 08:19:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:34.673 08:19:21 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:34.673 ************************************ 00:06:34.673 END TEST spdk_dd_malloc 00:06:34.673 ************************************ 00:06:34.673 00:06:34.673 real 0m7.634s 00:06:34.673 user 0m6.489s 00:06:34.673 sys 0m0.983s 00:06:34.673 08:19:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:34.673 08:19:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:34.673 08:19:22 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:34.673 08:19:22 spdk_dd -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']' 00:06:34.673 08:19:22 spdk_dd -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:34.673 08:19:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:34.673 ************************************ 00:06:34.673 START TEST spdk_dd_bdev_to_bdev 00:06:34.673 ************************************ 00:06:34.673 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:34.673 * Looking for test storage... 
00:06:34.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:34.673 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:06:34.673 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1638 -- # lcov --version 00:06:34.673 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:06:34.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.934 --rc genhtml_branch_coverage=1 00:06:34.934 --rc genhtml_function_coverage=1 00:06:34.934 --rc genhtml_legend=1 00:06:34.934 --rc geninfo_all_blocks=1 00:06:34.934 --rc geninfo_unexecuted_blocks=1 00:06:34.934 00:06:34.934 ' 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:06:34.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.934 --rc genhtml_branch_coverage=1 00:06:34.934 --rc genhtml_function_coverage=1 00:06:34.934 --rc genhtml_legend=1 00:06:34.934 --rc geninfo_all_blocks=1 00:06:34.934 --rc geninfo_unexecuted_blocks=1 00:06:34.934 00:06:34.934 ' 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:06:34.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.934 --rc genhtml_branch_coverage=1 00:06:34.934 --rc genhtml_function_coverage=1 00:06:34.934 --rc genhtml_legend=1 00:06:34.934 --rc geninfo_all_blocks=1 00:06:34.934 --rc geninfo_unexecuted_blocks=1 00:06:34.934 00:06:34.934 ' 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:06:34.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.934 --rc genhtml_branch_coverage=1 00:06:34.934 --rc genhtml_function_coverage=1 00:06:34.934 --rc genhtml_legend=1 00:06:34.934 --rc geninfo_all_blocks=1 00:06:34.934 --rc geninfo_unexecuted_blocks=1 00:06:34.934 00:06:34.934 ' 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.934 08:19:22 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:34.934 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- 
dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1108 -- # '[' 7 -le 1 ']' 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:34.935 ************************************ 00:06:34.935 START TEST dd_inflate_file 00:06:34.935 ************************************ 00:06:34.935 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:34.935 [2024-11-20 08:19:22.375511] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:34.935 [2024-11-20 08:19:22.375621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60651 ] 00:06:35.193 [2024-11-20 08:19:22.522988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.193 [2024-11-20 08:19:22.582258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.193 [2024-11-20 08:19:22.638781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.193  [2024-11-20T08:19:23.014Z] Copying: 64/64 [MB] (average 1454 MBps) 00:06:35.453 00:06:35.453 00:06:35.453 real 0m0.589s 00:06:35.453 user 0m0.341s 00:06:35.453 sys 0m0.312s 00:06:35.453 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:35.453 ************************************ 00:06:35.453 END TEST dd_inflate_file 00:06:35.453 ************************************ 00:06:35.453 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:35.453 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:35.453 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:35.453 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:35.453 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1108 -- # '[' 6 -le 1 ']' 00:06:35.453 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:35.453 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:35.453 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:35.453 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:35.453 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:35.453 ************************************ 00:06:35.453 START TEST dd_copy_to_out_bdev 00:06:35.453 ************************************ 00:06:35.453 08:19:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:35.711 { 00:06:35.711 "subsystems": [ 00:06:35.711 { 00:06:35.711 "subsystem": "bdev", 00:06:35.711 "config": [ 00:06:35.711 { 00:06:35.711 "params": { 00:06:35.711 "trtype": "pcie", 00:06:35.711 "traddr": "0000:00:10.0", 00:06:35.711 "name": "Nvme0" 00:06:35.711 }, 00:06:35.711 "method": "bdev_nvme_attach_controller" 00:06:35.711 }, 00:06:35.711 { 00:06:35.711 "params": { 00:06:35.711 "trtype": "pcie", 00:06:35.711 "traddr": "0000:00:11.0", 00:06:35.711 "name": "Nvme1" 00:06:35.711 }, 00:06:35.711 "method": "bdev_nvme_attach_controller" 00:06:35.711 }, 00:06:35.711 { 00:06:35.711 "method": "bdev_wait_for_examine" 00:06:35.711 } 00:06:35.711 ] 00:06:35.711 } 00:06:35.711 ] 00:06:35.711 } 00:06:35.711 [2024-11-20 08:19:23.027992] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:35.711 [2024-11-20 08:19:23.028094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60690 ] 00:06:35.711 [2024-11-20 08:19:23.175087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.711 [2024-11-20 08:19:23.230253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.970 [2024-11-20 08:19:23.284420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.906  [2024-11-20T08:19:24.726Z] Copying: 55/64 [MB] (55 MBps) [2024-11-20T08:19:24.985Z] Copying: 64/64 [MB] (average 55 MBps) 00:06:37.424 00:06:37.424 00:06:37.424 real 0m1.884s 00:06:37.424 user 0m1.649s 00:06:37.424 sys 0m1.521s 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:37.424 ************************************ 00:06:37.424 END TEST dd_copy_to_out_bdev 00:06:37.424 ************************************ 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:37.424 ************************************ 00:06:37.424 START TEST dd_offset_magic 00:06:37.424 ************************************ 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1132 -- # offset_magic 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:37.424 08:19:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:37.424 [2024-11-20 08:19:24.963142] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:37.424 [2024-11-20 08:19:24.963226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60735 ] 00:06:37.424 { 00:06:37.424 "subsystems": [ 00:06:37.424 { 00:06:37.424 "subsystem": "bdev", 00:06:37.424 "config": [ 00:06:37.424 { 00:06:37.424 "params": { 00:06:37.424 "trtype": "pcie", 00:06:37.424 "traddr": "0000:00:10.0", 00:06:37.424 "name": "Nvme0" 00:06:37.424 }, 00:06:37.424 "method": "bdev_nvme_attach_controller" 00:06:37.424 }, 00:06:37.424 { 00:06:37.424 "params": { 00:06:37.424 "trtype": "pcie", 00:06:37.424 "traddr": "0000:00:11.0", 00:06:37.424 "name": "Nvme1" 00:06:37.424 }, 00:06:37.424 "method": "bdev_nvme_attach_controller" 00:06:37.424 }, 00:06:37.424 { 00:06:37.424 "method": "bdev_wait_for_examine" 00:06:37.424 } 00:06:37.424 ] 00:06:37.424 } 00:06:37.424 ] 00:06:37.424 } 00:06:37.683 [2024-11-20 08:19:25.105181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.683 [2024-11-20 08:19:25.170664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.683 [2024-11-20 08:19:25.227628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.941  [2024-11-20T08:19:25.761Z] Copying: 65/65 [MB] (average 866 MBps) 00:06:38.200 00:06:38.200 08:19:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:38.200 08:19:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:38.200 08:19:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:38.200 08:19:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:38.462 [2024-11-20 08:19:25.775978] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:38.462 [2024-11-20 08:19:25.776079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60744 ] 00:06:38.462 { 00:06:38.462 "subsystems": [ 00:06:38.462 { 00:06:38.462 "subsystem": "bdev", 00:06:38.462 "config": [ 00:06:38.462 { 00:06:38.462 "params": { 00:06:38.462 "trtype": "pcie", 00:06:38.462 "traddr": "0000:00:10.0", 00:06:38.462 "name": "Nvme0" 00:06:38.462 }, 00:06:38.462 "method": "bdev_nvme_attach_controller" 00:06:38.462 }, 00:06:38.462 { 00:06:38.462 "params": { 00:06:38.462 "trtype": "pcie", 00:06:38.462 "traddr": "0000:00:11.0", 00:06:38.462 "name": "Nvme1" 00:06:38.462 }, 00:06:38.462 "method": "bdev_nvme_attach_controller" 00:06:38.462 }, 00:06:38.462 { 00:06:38.462 "method": "bdev_wait_for_examine" 00:06:38.462 } 00:06:38.462 ] 00:06:38.462 } 00:06:38.462 ] 00:06:38.462 } 00:06:38.462 [2024-11-20 08:19:25.922888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.462 [2024-11-20 08:19:25.979676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.720 [2024-11-20 08:19:26.036583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.720  [2024-11-20T08:19:26.539Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:38.978 00:06:38.978 08:19:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:38.978 08:19:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:38.978 08:19:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:38.979 08:19:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:38.979 08:19:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:38.979 08:19:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:38.979 08:19:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:38.979 [2024-11-20 08:19:26.480703] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:38.979 [2024-11-20 08:19:26.480852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60766 ] 00:06:38.979 { 00:06:38.979 "subsystems": [ 00:06:38.979 { 00:06:38.979 "subsystem": "bdev", 00:06:38.979 "config": [ 00:06:38.979 { 00:06:38.979 "params": { 00:06:38.979 "trtype": "pcie", 00:06:38.979 "traddr": "0000:00:10.0", 00:06:38.979 "name": "Nvme0" 00:06:38.979 }, 00:06:38.979 "method": "bdev_nvme_attach_controller" 00:06:38.979 }, 00:06:38.979 { 00:06:38.979 "params": { 00:06:38.979 "trtype": "pcie", 00:06:38.979 "traddr": "0000:00:11.0", 00:06:38.979 "name": "Nvme1" 00:06:38.979 }, 00:06:38.979 "method": "bdev_nvme_attach_controller" 00:06:38.979 }, 00:06:38.979 { 00:06:38.979 "method": "bdev_wait_for_examine" 00:06:38.979 } 00:06:38.979 ] 00:06:38.979 } 00:06:38.979 ] 00:06:38.979 } 00:06:39.236 [2024-11-20 08:19:26.629099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.236 [2024-11-20 08:19:26.689196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.236 [2024-11-20 08:19:26.745634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.494  [2024-11-20T08:19:27.314Z] Copying: 65/65 [MB] (average 984 MBps) 00:06:39.753 00:06:39.753 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:39.753 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:39.753 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:39.753 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:39.753 [2024-11-20 08:19:27.305858] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:39.753 [2024-11-20 08:19:27.305985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60786 ] 00:06:39.753 { 00:06:39.753 "subsystems": [ 00:06:39.753 { 00:06:39.753 "subsystem": "bdev", 00:06:39.753 "config": [ 00:06:39.753 { 00:06:39.753 "params": { 00:06:39.753 "trtype": "pcie", 00:06:39.753 "traddr": "0000:00:10.0", 00:06:39.753 "name": "Nvme0" 00:06:39.753 }, 00:06:39.753 "method": "bdev_nvme_attach_controller" 00:06:39.753 }, 00:06:39.753 { 00:06:39.753 "params": { 00:06:39.753 "trtype": "pcie", 00:06:39.753 "traddr": "0000:00:11.0", 00:06:39.753 "name": "Nvme1" 00:06:39.753 }, 00:06:39.753 "method": "bdev_nvme_attach_controller" 00:06:39.753 }, 00:06:39.753 { 00:06:39.753 "method": "bdev_wait_for_examine" 00:06:39.753 } 00:06:39.753 ] 00:06:39.753 } 00:06:39.753 ] 00:06:39.753 } 00:06:40.010 [2024-11-20 08:19:27.453801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.010 [2024-11-20 08:19:27.515310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.268 [2024-11-20 08:19:27.573146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.268  [2024-11-20T08:19:28.088Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:40.527 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:40.527 00:06:40.527 real 0m3.039s 00:06:40.527 user 0m2.207s 00:06:40.527 sys 0m0.926s 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:40.527 ************************************ 00:06:40.527 END TEST dd_offset_magic 00:06:40.527 ************************************ 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:40.527 08:19:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:40.527 [2024-11-20 08:19:28.050364] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:40.527 [2024-11-20 08:19:28.050477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60822 ] 00:06:40.527 { 00:06:40.527 "subsystems": [ 00:06:40.527 { 00:06:40.527 "subsystem": "bdev", 00:06:40.527 "config": [ 00:06:40.527 { 00:06:40.527 "params": { 00:06:40.527 "trtype": "pcie", 00:06:40.527 "traddr": "0000:00:10.0", 00:06:40.527 "name": "Nvme0" 00:06:40.527 }, 00:06:40.527 "method": "bdev_nvme_attach_controller" 00:06:40.527 }, 00:06:40.527 { 00:06:40.527 "params": { 00:06:40.527 "trtype": "pcie", 00:06:40.527 "traddr": "0000:00:11.0", 00:06:40.527 "name": "Nvme1" 00:06:40.527 }, 00:06:40.527 "method": "bdev_nvme_attach_controller" 00:06:40.527 }, 00:06:40.527 { 00:06:40.527 "method": "bdev_wait_for_examine" 00:06:40.527 } 00:06:40.527 ] 00:06:40.527 } 00:06:40.527 ] 00:06:40.527 } 00:06:40.786 [2024-11-20 08:19:28.198121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.786 [2024-11-20 08:19:28.263117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.786 [2024-11-20 08:19:28.321918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.045  [2024-11-20T08:19:28.865Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:41.304 00:06:41.304 08:19:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:41.304 08:19:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:41.304 08:19:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:41.304 08:19:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:41.304 08:19:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:41.304 08:19:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:41.304 08:19:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:41.304 08:19:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:41.304 08:19:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:41.304 08:19:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:41.304 { 00:06:41.304 "subsystems": [ 00:06:41.304 { 00:06:41.304 "subsystem": "bdev", 00:06:41.304 "config": [ 00:06:41.304 { 00:06:41.304 "params": { 00:06:41.304 "trtype": "pcie", 00:06:41.304 "traddr": "0000:00:10.0", 00:06:41.304 "name": "Nvme0" 00:06:41.304 }, 00:06:41.304 "method": "bdev_nvme_attach_controller" 00:06:41.304 }, 00:06:41.304 { 00:06:41.304 "params": { 00:06:41.304 "trtype": "pcie", 00:06:41.304 "traddr": "0000:00:11.0", 00:06:41.304 "name": "Nvme1" 00:06:41.304 }, 00:06:41.304 "method": "bdev_nvme_attach_controller" 00:06:41.304 }, 00:06:41.304 { 00:06:41.304 "method": "bdev_wait_for_examine" 00:06:41.304 } 00:06:41.304 ] 00:06:41.304 } 00:06:41.304 ] 00:06:41.304 } 00:06:41.304 [2024-11-20 08:19:28.764605] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:41.305 [2024-11-20 08:19:28.764739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60833 ] 00:06:41.563 [2024-11-20 08:19:28.913639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.563 [2024-11-20 08:19:28.967424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.563 [2024-11-20 08:19:29.025486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.822  [2024-11-20T08:19:29.642Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:06:42.081 00:06:42.081 08:19:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:42.081 00:06:42.081 real 0m7.386s 00:06:42.081 user 0m5.410s 00:06:42.081 sys 0m3.525s 00:06:42.081 08:19:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:42.081 08:19:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:42.081 ************************************ 00:06:42.081 END TEST spdk_dd_bdev_to_bdev 00:06:42.081 ************************************ 00:06:42.081 08:19:29 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:42.081 08:19:29 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:42.081 08:19:29 spdk_dd -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:42.081 08:19:29 spdk_dd -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:42.081 08:19:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:42.081 ************************************ 00:06:42.081 START TEST spdk_dd_uring 00:06:42.081 ************************************ 00:06:42.081 08:19:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:42.081 * Looking for test storage... 
00:06:42.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:42.081 08:19:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:06:42.081 08:19:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1638 -- # lcov --version 00:06:42.081 08:19:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:06:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.340 --rc genhtml_branch_coverage=1 00:06:42.340 --rc genhtml_function_coverage=1 00:06:42.340 --rc genhtml_legend=1 00:06:42.340 --rc geninfo_all_blocks=1 00:06:42.340 --rc geninfo_unexecuted_blocks=1 00:06:42.340 00:06:42.340 ' 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:06:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.340 --rc genhtml_branch_coverage=1 00:06:42.340 --rc genhtml_function_coverage=1 00:06:42.340 --rc genhtml_legend=1 00:06:42.340 --rc geninfo_all_blocks=1 00:06:42.340 --rc geninfo_unexecuted_blocks=1 00:06:42.340 00:06:42.340 ' 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:06:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.340 --rc genhtml_branch_coverage=1 00:06:42.340 --rc genhtml_function_coverage=1 00:06:42.340 --rc genhtml_legend=1 00:06:42.340 --rc geninfo_all_blocks=1 00:06:42.340 --rc geninfo_unexecuted_blocks=1 00:06:42.340 00:06:42.340 ' 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:06:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.340 --rc genhtml_branch_coverage=1 00:06:42.340 --rc genhtml_function_coverage=1 00:06:42.340 --rc genhtml_legend=1 00:06:42.340 --rc geninfo_all_blocks=1 00:06:42.340 --rc geninfo_unexecuted_blocks=1 00:06:42.340 00:06:42.340 ' 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.340 08:19:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring -- 
common/autotest_common.sh@10 -- # set +x 00:06:42.341 ************************************ 00:06:42.341 START TEST dd_uring_copy 00:06:42.341 ************************************ 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1132 -- # uring_zram_copy 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=2zzy60i96j6v9zx0q28xlda99x05t1vwugaw29lo2w1v8t1ksujdn30f80sndab205p5stlceesoknbormv7zwhw8uarxw0m8vkkhg3n01cisd8csahgp9ja6k64crk6yhrjbxexij5y8p6x8etwj41h6z34tai0tk5i7yq6pfozydkr9n5l2ff6rwbij90nybqbysaaor8hb6bnv0ciskfj6kdcsxyjseruyddnkdt7zxvdtiz68ycq4byqqt8wg84jh1dghcu91pzt44ridmlus5eqc8ftsvvdluf63ulvvv9tk2li502hsbbn3tmqyfmhrvjfqp7de5jvbpn7kevixzmg1l0woxgtl7yw0ccqrde3ynyy6y91nqlj758zsx5rzic1e5ay7usv7qvkgaqif16tj6zar4n1lvhq95j17ibt52owqkhb0savhwgfdfqrk98vazkpkl2215tq7qxg4tfxdvzv6f672eht1hru24xiv1ztdsjbpmwxcwj0gozylc2tmbkgoq8bz9b9it5eepow9pu7zj5hsrt5dsue3pf4hs00xxu9t6871m92uvn5dgxlya9l8az0amxzm65o4bzdj7anxke5w0ri7l0m187u1pw9hxit9fjnqn3ab7nwc11fg4kb4qnpeit3qy8mirlu18msn97c3i0yakkzq7ujbzdyyzzzj8uqa3ky1mh80j9nrko5wg63ysxfvcqf0y07kc3uipoatpngzqt64etxwi37sfwqtdui99cz5xdd8hm02gp12edxprj56rijxhcwn7j6giyr9q88sg3tvyggef0p12wtcmw7irvfw8j4kt78mvdsp63mjgrynbiyhci8bcexilpqyqth729y4er8srvbjkzssztxpp88xeks6aytxa2a3rjxcf0eqc7sy97kmi003fp7s4bxkkeqblebldr8yykaw3pn03z6oz4m0g2rr8cug52osxedv49l5e0ahnosfas1uxqrappd16fjurc0apdwmt4gss1q 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 2zzy60i96j6v9zx0q28xlda99x05t1vwugaw29lo2w1v8t1ksujdn30f80sndab205p5stlceesoknbormv7zwhw8uarxw0m8vkkhg3n01cisd8csahgp9ja6k64crk6yhrjbxexij5y8p6x8etwj41h6z34tai0tk5i7yq6pfozydkr9n5l2ff6rwbij90nybqbysaaor8hb6bnv0ciskfj6kdcsxyjseruyddnkdt7zxvdtiz68ycq4byqqt8wg84jh1dghcu91pzt44ridmlus5eqc8ftsvvdluf63ulvvv9tk2li502hsbbn3tmqyfmhrvjfqp7de5jvbpn7kevixzmg1l0woxgtl7yw0ccqrde3ynyy6y91nqlj758zsx5rzic1e5ay7usv7qvkgaqif16tj6zar4n1lvhq95j17ibt52owqkhb0savhwgfdfqrk98vazkpkl2215tq7qxg4tfxdvzv6f672eht1hru24xiv1ztdsjbpmwxcwj0gozylc2tmbkgoq8bz9b9it5eepow9pu7zj5hsrt5dsue3pf4hs00xxu9t6871m92uvn5dgxlya9l8az0amxzm65o4bzdj7anxke5w0ri7l0m187u1pw9hxit9fjnqn3ab7nwc11fg4kb4qnpeit3qy8mirlu18msn97c3i0yakkzq7ujbzdyyzzzj8uqa3ky1mh80j9nrko5wg63ysxfvcqf0y07kc3uipoatpngzqt64etxwi37sfwqtdui99cz5xdd8hm02gp12edxprj56rijxhcwn7j6giyr9q88sg3tvyggef0p12wtcmw7irvfw8j4kt78mvdsp63mjgrynbiyhci8bcexilpqyqth729y4er8srvbjkzssztxpp88xeks6aytxa2a3rjxcf0eqc7sy97kmi003fp7s4bxkkeqblebldr8yykaw3pn03z6oz4m0g2rr8cug52osxedv49l5e0ahnosfas1uxqrappd16fjurc0apdwmt4gss1q 00:06:42.341 08:19:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:42.341 [2024-11-20 08:19:29.845689] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:42.341 [2024-11-20 08:19:29.845840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60917 ] 00:06:42.600 [2024-11-20 08:19:29.992187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.600 [2024-11-20 08:19:30.055931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.600 [2024-11-20 08:19:30.114341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.534  [2024-11-20T08:19:31.353Z] Copying: 511/511 [MB] (average 1292 MBps) 00:06:43.792 00:06:43.792 08:19:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:43.792 08:19:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:43.792 08:19:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:43.792 08:19:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:43.792 { 00:06:43.792 "subsystems": [ 00:06:43.792 { 00:06:43.792 "subsystem": "bdev", 00:06:43.792 "config": [ 00:06:43.792 { 00:06:43.792 "params": { 00:06:43.792 "block_size": 512, 00:06:43.792 "num_blocks": 1048576, 00:06:43.792 "name": "malloc0" 00:06:43.792 }, 00:06:43.792 "method": "bdev_malloc_create" 00:06:43.792 }, 00:06:43.792 { 00:06:43.792 "params": { 00:06:43.792 "filename": "/dev/zram1", 00:06:43.792 "name": "uring0" 00:06:43.792 }, 00:06:43.792 "method": "bdev_uring_create" 00:06:43.792 }, 00:06:43.792 { 00:06:43.792 "method": "bdev_wait_for_examine" 00:06:43.792 } 00:06:43.792 ] 00:06:43.792 } 00:06:43.792 ] 00:06:43.792 } 00:06:43.792 [2024-11-20 08:19:31.178045] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:43.792 [2024-11-20 08:19:31.178147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60943 ] 00:06:43.792 [2024-11-20 08:19:31.326442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.051 [2024-11-20 08:19:31.382700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.051 [2024-11-20 08:19:31.440433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.432  [2024-11-20T08:19:33.930Z] Copying: 232/512 [MB] (232 MBps) [2024-11-20T08:19:34.188Z] Copying: 444/512 [MB] (212 MBps) [2024-11-20T08:19:34.754Z] Copying: 512/512 [MB] (average 221 MBps) 00:06:47.193 00:06:47.193 08:19:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:47.193 08:19:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:47.193 08:19:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:47.193 08:19:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.193 [2024-11-20 08:19:34.585282] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:47.193 [2024-11-20 08:19:34.585426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60988 ] 00:06:47.193 { 00:06:47.193 "subsystems": [ 00:06:47.193 { 00:06:47.193 "subsystem": "bdev", 00:06:47.193 "config": [ 00:06:47.193 { 00:06:47.193 "params": { 00:06:47.193 "block_size": 512, 00:06:47.193 "num_blocks": 1048576, 00:06:47.193 "name": "malloc0" 00:06:47.193 }, 00:06:47.193 "method": "bdev_malloc_create" 00:06:47.193 }, 00:06:47.193 { 00:06:47.193 "params": { 00:06:47.193 "filename": "/dev/zram1", 00:06:47.193 "name": "uring0" 00:06:47.193 }, 00:06:47.193 "method": "bdev_uring_create" 00:06:47.193 }, 00:06:47.193 { 00:06:47.193 "method": "bdev_wait_for_examine" 00:06:47.193 } 00:06:47.193 ] 00:06:47.193 } 00:06:47.193 ] 00:06:47.193 } 00:06:47.193 [2024-11-20 08:19:34.727285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.451 [2024-11-20 08:19:34.808034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.451 [2024-11-20 08:19:34.883788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.826  [2024-11-20T08:19:37.322Z] Copying: 164/512 [MB] (164 MBps) [2024-11-20T08:19:38.318Z] Copying: 329/512 [MB] (164 MBps) [2024-11-20T08:19:38.318Z] Copying: 492/512 [MB] (162 MBps) [2024-11-20T08:19:38.886Z] Copying: 512/512 [MB] (average 164 MBps) 00:06:51.325 00:06:51.325 08:19:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:51.325 08:19:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 2zzy60i96j6v9zx0q28xlda99x05t1vwugaw29lo2w1v8t1ksujdn30f80sndab205p5stlceesoknbormv7zwhw8uarxw0m8vkkhg3n01cisd8csahgp9ja6k64crk6yhrjbxexij5y8p6x8etwj41h6z34tai0tk5i7yq6pfozydkr9n5l2ff6rwbij90nybqbysaaor8hb6bnv0ciskfj6kdcsxyjseruyddnkdt7zxvdtiz68ycq4byqqt8wg84jh1dghcu91pzt44ridmlus5eqc8ftsvvdluf63ulvvv9tk2li502hsbbn3tmqyfmhrvjfqp7de5jvbpn7kevixzmg1l0woxgtl7yw0ccqrde3ynyy6y91nqlj758zsx5rzic1e5ay7usv7qvkgaqif16tj6zar4n1lvhq95j17ibt52owqkhb0savhwgfdfqrk98vazkpkl2215tq7qxg4tfxdvzv6f672eht1hru24xiv1ztdsjbpmwxcwj0gozylc2tmbkgoq8bz9b9it5eepow9pu7zj5hsrt5dsue3pf4hs00xxu9t6871m92uvn5dgxlya9l8az0amxzm65o4bzdj7anxke5w0ri7l0m187u1pw9hxit9fjnqn3ab7nwc11fg4kb4qnpeit3qy8mirlu18msn97c3i0yakkzq7ujbzdyyzzzj8uqa3ky1mh80j9nrko5wg63ysxfvcqf0y07kc3uipoatpngzqt64etxwi37sfwqtdui99cz5xdd8hm02gp12edxprj56rijxhcwn7j6giyr9q88sg3tvyggef0p12wtcmw7irvfw8j4kt78mvdsp63mjgrynbiyhci8bcexilpqyqth729y4er8srvbjkzssztxpp88xeks6aytxa2a3rjxcf0eqc7sy97kmi003fp7s4bxkkeqblebldr8yykaw3pn03z6oz4m0g2rr8cug52osxedv49l5e0ahnosfas1uxqrappd16fjurc0apdwmt4gss1q == 
\2\z\z\y\6\0\i\9\6\j\6\v\9\z\x\0\q\2\8\x\l\d\a\9\9\x\0\5\t\1\v\w\u\g\a\w\2\9\l\o\2\w\1\v\8\t\1\k\s\u\j\d\n\3\0\f\8\0\s\n\d\a\b\2\0\5\p\5\s\t\l\c\e\e\s\o\k\n\b\o\r\m\v\7\z\w\h\w\8\u\a\r\x\w\0\m\8\v\k\k\h\g\3\n\0\1\c\i\s\d\8\c\s\a\h\g\p\9\j\a\6\k\6\4\c\r\k\6\y\h\r\j\b\x\e\x\i\j\5\y\8\p\6\x\8\e\t\w\j\4\1\h\6\z\3\4\t\a\i\0\t\k\5\i\7\y\q\6\p\f\o\z\y\d\k\r\9\n\5\l\2\f\f\6\r\w\b\i\j\9\0\n\y\b\q\b\y\s\a\a\o\r\8\h\b\6\b\n\v\0\c\i\s\k\f\j\6\k\d\c\s\x\y\j\s\e\r\u\y\d\d\n\k\d\t\7\z\x\v\d\t\i\z\6\8\y\c\q\4\b\y\q\q\t\8\w\g\8\4\j\h\1\d\g\h\c\u\9\1\p\z\t\4\4\r\i\d\m\l\u\s\5\e\q\c\8\f\t\s\v\v\d\l\u\f\6\3\u\l\v\v\v\9\t\k\2\l\i\5\0\2\h\s\b\b\n\3\t\m\q\y\f\m\h\r\v\j\f\q\p\7\d\e\5\j\v\b\p\n\7\k\e\v\i\x\z\m\g\1\l\0\w\o\x\g\t\l\7\y\w\0\c\c\q\r\d\e\3\y\n\y\y\6\y\9\1\n\q\l\j\7\5\8\z\s\x\5\r\z\i\c\1\e\5\a\y\7\u\s\v\7\q\v\k\g\a\q\i\f\1\6\t\j\6\z\a\r\4\n\1\l\v\h\q\9\5\j\1\7\i\b\t\5\2\o\w\q\k\h\b\0\s\a\v\h\w\g\f\d\f\q\r\k\9\8\v\a\z\k\p\k\l\2\2\1\5\t\q\7\q\x\g\4\t\f\x\d\v\z\v\6\f\6\7\2\e\h\t\1\h\r\u\2\4\x\i\v\1\z\t\d\s\j\b\p\m\w\x\c\w\j\0\g\o\z\y\l\c\2\t\m\b\k\g\o\q\8\b\z\9\b\9\i\t\5\e\e\p\o\w\9\p\u\7\z\j\5\h\s\r\t\5\d\s\u\e\3\p\f\4\h\s\0\0\x\x\u\9\t\6\8\7\1\m\9\2\u\v\n\5\d\g\x\l\y\a\9\l\8\a\z\0\a\m\x\z\m\6\5\o\4\b\z\d\j\7\a\n\x\k\e\5\w\0\r\i\7\l\0\m\1\8\7\u\1\p\w\9\h\x\i\t\9\f\j\n\q\n\3\a\b\7\n\w\c\1\1\f\g\4\k\b\4\q\n\p\e\i\t\3\q\y\8\m\i\r\l\u\1\8\m\s\n\9\7\c\3\i\0\y\a\k\k\z\q\7\u\j\b\z\d\y\y\z\z\z\j\8\u\q\a\3\k\y\1\m\h\8\0\j\9\n\r\k\o\5\w\g\6\3\y\s\x\f\v\c\q\f\0\y\0\7\k\c\3\u\i\p\o\a\t\p\n\g\z\q\t\6\4\e\t\x\w\i\3\7\s\f\w\q\t\d\u\i\9\9\c\z\5\x\d\d\8\h\m\0\2\g\p\1\2\e\d\x\p\r\j\5\6\r\i\j\x\h\c\w\n\7\j\6\g\i\y\r\9\q\8\8\s\g\3\t\v\y\g\g\e\f\0\p\1\2\w\t\c\m\w\7\i\r\v\f\w\8\j\4\k\t\7\8\m\v\d\s\p\6\3\m\j\g\r\y\n\b\i\y\h\c\i\8\b\c\e\x\i\l\p\q\y\q\t\h\7\2\9\y\4\e\r\8\s\r\v\b\j\k\z\s\s\z\t\x\p\p\8\8\x\e\k\s\6\a\y\t\x\a\2\a\3\r\j\x\c\f\0\e\q\c\7\s\y\9\7\k\m\i\0\0\3\f\p\7\s\4\b\x\k\k\e\q\b\l\e\b\l\d\r\8\y\y\k\a\w\3\p\n\0\3\z\6\o\z\4\m\0\g\2\r\r\8\c\u\g\5\2\o\s\x\e\d\v\4\9\l\5\e\0\a\h\n\o\s\f\a\s\1\u\x\q\r\a\p\p\d\1\6\f\j\u\r\c\0\a\p\d\w\m\t\4\g\s\s\1\q ]] 00:06:51.325 08:19:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:51.325 08:19:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 2zzy60i96j6v9zx0q28xlda99x05t1vwugaw29lo2w1v8t1ksujdn30f80sndab205p5stlceesoknbormv7zwhw8uarxw0m8vkkhg3n01cisd8csahgp9ja6k64crk6yhrjbxexij5y8p6x8etwj41h6z34tai0tk5i7yq6pfozydkr9n5l2ff6rwbij90nybqbysaaor8hb6bnv0ciskfj6kdcsxyjseruyddnkdt7zxvdtiz68ycq4byqqt8wg84jh1dghcu91pzt44ridmlus5eqc8ftsvvdluf63ulvvv9tk2li502hsbbn3tmqyfmhrvjfqp7de5jvbpn7kevixzmg1l0woxgtl7yw0ccqrde3ynyy6y91nqlj758zsx5rzic1e5ay7usv7qvkgaqif16tj6zar4n1lvhq95j17ibt52owqkhb0savhwgfdfqrk98vazkpkl2215tq7qxg4tfxdvzv6f672eht1hru24xiv1ztdsjbpmwxcwj0gozylc2tmbkgoq8bz9b9it5eepow9pu7zj5hsrt5dsue3pf4hs00xxu9t6871m92uvn5dgxlya9l8az0amxzm65o4bzdj7anxke5w0ri7l0m187u1pw9hxit9fjnqn3ab7nwc11fg4kb4qnpeit3qy8mirlu18msn97c3i0yakkzq7ujbzdyyzzzj8uqa3ky1mh80j9nrko5wg63ysxfvcqf0y07kc3uipoatpngzqt64etxwi37sfwqtdui99cz5xdd8hm02gp12edxprj56rijxhcwn7j6giyr9q88sg3tvyggef0p12wtcmw7irvfw8j4kt78mvdsp63mjgrynbiyhci8bcexilpqyqth729y4er8srvbjkzssztxpp88xeks6aytxa2a3rjxcf0eqc7sy97kmi003fp7s4bxkkeqblebldr8yykaw3pn03z6oz4m0g2rr8cug52osxedv49l5e0ahnosfas1uxqrappd16fjurc0apdwmt4gss1q == 
\2\z\z\y\6\0\i\9\6\j\6\v\9\z\x\0\q\2\8\x\l\d\a\9\9\x\0\5\t\1\v\w\u\g\a\w\2\9\l\o\2\w\1\v\8\t\1\k\s\u\j\d\n\3\0\f\8\0\s\n\d\a\b\2\0\5\p\5\s\t\l\c\e\e\s\o\k\n\b\o\r\m\v\7\z\w\h\w\8\u\a\r\x\w\0\m\8\v\k\k\h\g\3\n\0\1\c\i\s\d\8\c\s\a\h\g\p\9\j\a\6\k\6\4\c\r\k\6\y\h\r\j\b\x\e\x\i\j\5\y\8\p\6\x\8\e\t\w\j\4\1\h\6\z\3\4\t\a\i\0\t\k\5\i\7\y\q\6\p\f\o\z\y\d\k\r\9\n\5\l\2\f\f\6\r\w\b\i\j\9\0\n\y\b\q\b\y\s\a\a\o\r\8\h\b\6\b\n\v\0\c\i\s\k\f\j\6\k\d\c\s\x\y\j\s\e\r\u\y\d\d\n\k\d\t\7\z\x\v\d\t\i\z\6\8\y\c\q\4\b\y\q\q\t\8\w\g\8\4\j\h\1\d\g\h\c\u\9\1\p\z\t\4\4\r\i\d\m\l\u\s\5\e\q\c\8\f\t\s\v\v\d\l\u\f\6\3\u\l\v\v\v\9\t\k\2\l\i\5\0\2\h\s\b\b\n\3\t\m\q\y\f\m\h\r\v\j\f\q\p\7\d\e\5\j\v\b\p\n\7\k\e\v\i\x\z\m\g\1\l\0\w\o\x\g\t\l\7\y\w\0\c\c\q\r\d\e\3\y\n\y\y\6\y\9\1\n\q\l\j\7\5\8\z\s\x\5\r\z\i\c\1\e\5\a\y\7\u\s\v\7\q\v\k\g\a\q\i\f\1\6\t\j\6\z\a\r\4\n\1\l\v\h\q\9\5\j\1\7\i\b\t\5\2\o\w\q\k\h\b\0\s\a\v\h\w\g\f\d\f\q\r\k\9\8\v\a\z\k\p\k\l\2\2\1\5\t\q\7\q\x\g\4\t\f\x\d\v\z\v\6\f\6\7\2\e\h\t\1\h\r\u\2\4\x\i\v\1\z\t\d\s\j\b\p\m\w\x\c\w\j\0\g\o\z\y\l\c\2\t\m\b\k\g\o\q\8\b\z\9\b\9\i\t\5\e\e\p\o\w\9\p\u\7\z\j\5\h\s\r\t\5\d\s\u\e\3\p\f\4\h\s\0\0\x\x\u\9\t\6\8\7\1\m\9\2\u\v\n\5\d\g\x\l\y\a\9\l\8\a\z\0\a\m\x\z\m\6\5\o\4\b\z\d\j\7\a\n\x\k\e\5\w\0\r\i\7\l\0\m\1\8\7\u\1\p\w\9\h\x\i\t\9\f\j\n\q\n\3\a\b\7\n\w\c\1\1\f\g\4\k\b\4\q\n\p\e\i\t\3\q\y\8\m\i\r\l\u\1\8\m\s\n\9\7\c\3\i\0\y\a\k\k\z\q\7\u\j\b\z\d\y\y\z\z\z\j\8\u\q\a\3\k\y\1\m\h\8\0\j\9\n\r\k\o\5\w\g\6\3\y\s\x\f\v\c\q\f\0\y\0\7\k\c\3\u\i\p\o\a\t\p\n\g\z\q\t\6\4\e\t\x\w\i\3\7\s\f\w\q\t\d\u\i\9\9\c\z\5\x\d\d\8\h\m\0\2\g\p\1\2\e\d\x\p\r\j\5\6\r\i\j\x\h\c\w\n\7\j\6\g\i\y\r\9\q\8\8\s\g\3\t\v\y\g\g\e\f\0\p\1\2\w\t\c\m\w\7\i\r\v\f\w\8\j\4\k\t\7\8\m\v\d\s\p\6\3\m\j\g\r\y\n\b\i\y\h\c\i\8\b\c\e\x\i\l\p\q\y\q\t\h\7\2\9\y\4\e\r\8\s\r\v\b\j\k\z\s\s\z\t\x\p\p\8\8\x\e\k\s\6\a\y\t\x\a\2\a\3\r\j\x\c\f\0\e\q\c\7\s\y\9\7\k\m\i\0\0\3\f\p\7\s\4\b\x\k\k\e\q\b\l\e\b\l\d\r\8\y\y\k\a\w\3\p\n\0\3\z\6\o\z\4\m\0\g\2\r\r\8\c\u\g\5\2\o\s\x\e\d\v\4\9\l\5\e\0\a\h\n\o\s\f\a\s\1\u\x\q\r\a\p\p\d\1\6\f\j\u\r\c\0\a\p\d\w\m\t\4\g\s\s\1\q ]] 00:06:51.325 08:19:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:51.584 08:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:51.584 08:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:51.584 08:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:51.584 08:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:51.584 { 00:06:51.584 "subsystems": [ 00:06:51.584 { 00:06:51.584 "subsystem": "bdev", 00:06:51.584 "config": [ 00:06:51.584 { 00:06:51.584 "params": { 00:06:51.584 "block_size": 512, 00:06:51.584 "num_blocks": 1048576, 00:06:51.584 "name": "malloc0" 00:06:51.584 }, 00:06:51.584 "method": "bdev_malloc_create" 00:06:51.584 }, 00:06:51.584 { 00:06:51.584 "params": { 00:06:51.584 "filename": "/dev/zram1", 00:06:51.584 "name": "uring0" 00:06:51.584 }, 00:06:51.584 "method": "bdev_uring_create" 00:06:51.584 }, 00:06:51.584 { 00:06:51.584 "method": "bdev_wait_for_examine" 00:06:51.584 } 00:06:51.584 ] 00:06:51.584 } 00:06:51.584 ] 00:06:51.584 } 00:06:51.584 [2024-11-20 08:19:39.090680] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:51.584 [2024-11-20 08:19:39.090792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61057 ] 00:06:51.843 [2024-11-20 08:19:39.239376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.843 [2024-11-20 08:19:39.298856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.843 [2024-11-20 08:19:39.353566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.219  [2024-11-20T08:19:41.716Z] Copying: 169/512 [MB] (169 MBps) [2024-11-20T08:19:42.652Z] Copying: 337/512 [MB] (167 MBps) [2024-11-20T08:19:42.652Z] Copying: 504/512 [MB] (167 MBps) [2024-11-20T08:19:43.217Z] Copying: 512/512 [MB] (average 168 MBps) 00:06:55.656 00:06:55.656 08:19:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:55.656 08:19:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:55.656 08:19:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:55.656 08:19:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:55.656 08:19:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:55.656 08:19:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:55.656 08:19:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:55.656 08:19:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:55.656 [2024-11-20 08:19:43.057710] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:06:55.656 [2024-11-20 08:19:43.057847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61113 ] 00:06:55.656 { 00:06:55.656 "subsystems": [ 00:06:55.656 { 00:06:55.656 "subsystem": "bdev", 00:06:55.656 "config": [ 00:06:55.656 { 00:06:55.656 "params": { 00:06:55.656 "block_size": 512, 00:06:55.656 "num_blocks": 1048576, 00:06:55.656 "name": "malloc0" 00:06:55.656 }, 00:06:55.656 "method": "bdev_malloc_create" 00:06:55.656 }, 00:06:55.657 { 00:06:55.657 "params": { 00:06:55.657 "filename": "/dev/zram1", 00:06:55.657 "name": "uring0" 00:06:55.657 }, 00:06:55.657 "method": "bdev_uring_create" 00:06:55.657 }, 00:06:55.657 { 00:06:55.657 "params": { 00:06:55.657 "name": "uring0" 00:06:55.657 }, 00:06:55.657 "method": "bdev_uring_delete" 00:06:55.657 }, 00:06:55.657 { 00:06:55.657 "method": "bdev_wait_for_examine" 00:06:55.657 } 00:06:55.657 ] 00:06:55.657 } 00:06:55.657 ] 00:06:55.657 } 00:06:55.657 [2024-11-20 08:19:43.209528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.914 [2024-11-20 08:19:43.272342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.914 [2024-11-20 08:19:43.331395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.171  [2024-11-20T08:19:43.991Z] Copying: 0/0 [B] (average 0 Bps) 00:06:56.430 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # local es=0 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.430 08:19:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:56.430 08:19:43 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:56.689 [2024-11-20 08:19:43.995515] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:56.689 [2024-11-20 08:19:43.995640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61146 ] 00:06:56.689 { 00:06:56.689 "subsystems": [ 00:06:56.689 { 00:06:56.689 "subsystem": "bdev", 00:06:56.689 "config": [ 00:06:56.689 { 00:06:56.689 "params": { 00:06:56.689 "block_size": 512, 00:06:56.689 "num_blocks": 1048576, 00:06:56.689 "name": "malloc0" 00:06:56.689 }, 00:06:56.689 "method": "bdev_malloc_create" 00:06:56.689 }, 00:06:56.689 { 00:06:56.689 "params": { 00:06:56.689 "filename": "/dev/zram1", 00:06:56.689 "name": "uring0" 00:06:56.689 }, 00:06:56.689 "method": "bdev_uring_create" 00:06:56.689 }, 00:06:56.689 { 00:06:56.689 "params": { 00:06:56.689 "name": "uring0" 00:06:56.689 }, 00:06:56.689 "method": "bdev_uring_delete" 00:06:56.689 }, 00:06:56.689 { 00:06:56.689 "method": "bdev_wait_for_examine" 00:06:56.689 } 00:06:56.689 ] 00:06:56.689 } 00:06:56.689 ] 00:06:56.689 } 00:06:56.689 [2024-11-20 08:19:44.143657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.689 [2024-11-20 08:19:44.210864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.947 [2024-11-20 08:19:44.267529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.947 [2024-11-20 08:19:44.483954] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:56.947 [2024-11-20 08:19:44.484017] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:56.947 [2024-11-20 08:19:44.484036] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:56.947 [2024-11-20 08:19:44.484054] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.522 [2024-11-20 08:19:44.819720] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:57.522 08:19:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@658 -- # es=237 00:06:57.522 08:19:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:06:57.522 08:19:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@667 -- # es=109 00:06:57.522 08:19:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # case "$es" in 00:06:57.522 08:19:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # es=1 00:06:57.522 08:19:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:06:57.522 08:19:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:57.522 08:19:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:57.522 08:19:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:57.522 08:19:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:57.522 08:19:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:57.522 08:19:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:57.789 00:06:57.789 real 0m15.464s 00:06:57.789 user 0m10.285s 00:06:57.789 sys 0m13.250s 00:06:57.789 08:19:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:57.789 08:19:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.789 ************************************ 00:06:57.789 END TEST dd_uring_copy 00:06:57.789 ************************************ 00:06:57.789 00:06:57.789 real 0m15.752s 00:06:57.789 user 0m10.434s 00:06:57.789 sys 0m13.395s 00:06:57.789 08:19:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:57.789 08:19:45 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:57.789 ************************************ 00:06:57.789 END TEST spdk_dd_uring 00:06:57.789 ************************************ 00:06:57.789 08:19:45 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:57.789 08:19:45 spdk_dd -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:57.789 08:19:45 spdk_dd -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:57.789 08:19:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:57.789 ************************************ 00:06:57.789 START TEST spdk_dd_sparse 00:06:57.789 ************************************ 00:06:57.789 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:58.049 * Looking for test storage... 00:06:58.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1638 -- # lcov --version 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:06:58.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.049 --rc genhtml_branch_coverage=1 00:06:58.049 --rc genhtml_function_coverage=1 00:06:58.049 --rc genhtml_legend=1 00:06:58.049 --rc geninfo_all_blocks=1 00:06:58.049 --rc geninfo_unexecuted_blocks=1 00:06:58.049 00:06:58.049 ' 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:06:58.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.049 --rc genhtml_branch_coverage=1 00:06:58.049 --rc genhtml_function_coverage=1 00:06:58.049 --rc genhtml_legend=1 00:06:58.049 --rc geninfo_all_blocks=1 00:06:58.049 --rc geninfo_unexecuted_blocks=1 00:06:58.049 00:06:58.049 ' 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:06:58.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.049 --rc genhtml_branch_coverage=1 00:06:58.049 --rc genhtml_function_coverage=1 00:06:58.049 --rc genhtml_legend=1 00:06:58.049 --rc geninfo_all_blocks=1 00:06:58.049 --rc geninfo_unexecuted_blocks=1 00:06:58.049 00:06:58.049 ' 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:06:58.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.049 --rc genhtml_branch_coverage=1 00:06:58.049 --rc genhtml_function_coverage=1 00:06:58.049 --rc genhtml_legend=1 00:06:58.049 --rc geninfo_all_blocks=1 00:06:58.049 --rc geninfo_unexecuted_blocks=1 00:06:58.049 00:06:58.049 ' 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.049 08:19:45 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- 
dd/sparse.sh@111 -- # file2=file_zero2 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:58.049 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:58.049 1+0 records in 00:06:58.050 1+0 records out 00:06:58.050 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00664137 s, 632 MB/s 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:58.050 1+0 records in 00:06:58.050 1+0 records out 00:06:58.050 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00757498 s, 554 MB/s 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:58.050 1+0 records in 00:06:58.050 1+0 records out 00:06:58.050 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00787055 s, 533 MB/s 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:58.050 ************************************ 00:06:58.050 START TEST dd_sparse_file_to_file 00:06:58.050 ************************************ 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1132 -- # file_to_file 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:58.050 08:19:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:58.309 [2024-11-20 08:19:45.653381] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
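For reference, the sparse input that spdk_dd is copying here can be reproduced outside the test harness with plain coreutils: the harness truncates a 100 MB backing file for the aio bdev and then writes three 4 MiB chunks into file_zero1 at offsets 0, 16 MiB and 32 MiB, leaving holes in between. A minimal sketch of those same preparation steps (file names and sizes simply mirror the log above; nothing beyond that is implied):

  truncate --size=104857600 dd_sparse_aio_disk          # backing file later exposed as the dd_aio bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1           # 4 MiB of data at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # 4 MiB at offset 16 MiB, hole before it
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # 4 MiB at offset 32 MiB, second hole

Passing --sparse to spdk_dd enables hole skipping in the input target, which is consistent with the progress line below reporting 12 of the 36 MB apparent size actually copied.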
00:06:58.309 [2024-11-20 08:19:45.653501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61250 ] 00:06:58.309 { 00:06:58.309 "subsystems": [ 00:06:58.309 { 00:06:58.309 "subsystem": "bdev", 00:06:58.309 "config": [ 00:06:58.309 { 00:06:58.309 "params": { 00:06:58.309 "block_size": 4096, 00:06:58.309 "filename": "dd_sparse_aio_disk", 00:06:58.309 "name": "dd_aio" 00:06:58.309 }, 00:06:58.309 "method": "bdev_aio_create" 00:06:58.309 }, 00:06:58.309 { 00:06:58.309 "params": { 00:06:58.309 "lvs_name": "dd_lvstore", 00:06:58.309 "bdev_name": "dd_aio" 00:06:58.309 }, 00:06:58.309 "method": "bdev_lvol_create_lvstore" 00:06:58.309 }, 00:06:58.309 { 00:06:58.309 "method": "bdev_wait_for_examine" 00:06:58.309 } 00:06:58.309 ] 00:06:58.309 } 00:06:58.309 ] 00:06:58.309 } 00:06:58.309 [2024-11-20 08:19:45.810389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.569 [2024-11-20 08:19:45.880717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.569 [2024-11-20 08:19:45.942080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.569  [2024-11-20T08:19:46.389Z] Copying: 12/36 [MB] (average 857 MBps) 00:06:58.828 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:58.828 00:06:58.828 real 0m0.707s 00:06:58.828 user 0m0.445s 00:06:58.828 sys 0m0.382s 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:58.828 ************************************ 00:06:58.828 END TEST dd_sparse_file_to_file 00:06:58.828 ************************************ 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:58.828 ************************************ 00:06:58.828 START TEST dd_sparse_file_to_bdev 
00:06:58.828 ************************************ 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1132 -- # file_to_bdev 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:58.828 08:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:59.087 [2024-11-20 08:19:46.408651] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:59.087 [2024-11-20 08:19:46.408786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61298 ] 00:06:59.087 { 00:06:59.087 "subsystems": [ 00:06:59.087 { 00:06:59.087 "subsystem": "bdev", 00:06:59.087 "config": [ 00:06:59.087 { 00:06:59.087 "params": { 00:06:59.087 "block_size": 4096, 00:06:59.087 "filename": "dd_sparse_aio_disk", 00:06:59.087 "name": "dd_aio" 00:06:59.087 }, 00:06:59.087 "method": "bdev_aio_create" 00:06:59.087 }, 00:06:59.087 { 00:06:59.087 "params": { 00:06:59.087 "lvs_name": "dd_lvstore", 00:06:59.087 "lvol_name": "dd_lvol", 00:06:59.087 "size_in_mib": 36, 00:06:59.087 "thin_provision": true 00:06:59.087 }, 00:06:59.087 "method": "bdev_lvol_create" 00:06:59.087 }, 00:06:59.087 { 00:06:59.087 "method": "bdev_wait_for_examine" 00:06:59.087 } 00:06:59.087 ] 00:06:59.087 } 00:06:59.087 ] 00:06:59.087 } 00:06:59.087 [2024-11-20 08:19:46.561088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.087 [2024-11-20 08:19:46.635311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.345 [2024-11-20 08:19:46.697098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.345  [2024-11-20T08:19:47.165Z] Copying: 12/36 [MB] (average 521 MBps) 00:06:59.604 00:06:59.604 00:06:59.604 real 0m0.665s 00:06:59.604 user 0m0.441s 00:06:59.604 sys 0m0.353s 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1133 -- # xtrace_disable 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:59.604 ************************************ 00:06:59.604 END TEST dd_sparse_file_to_bdev 00:06:59.604 ************************************ 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1114 -- # xtrace_disable 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:59.604 ************************************ 00:06:59.604 START TEST dd_sparse_bdev_to_file 00:06:59.604 ************************************ 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1132 -- # bdev_to_file 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:59.604 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:59.604 [2024-11-20 08:19:47.128180] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:06:59.604 [2024-11-20 08:19:47.128310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61330 ] 00:06:59.604 { 00:06:59.604 "subsystems": [ 00:06:59.604 { 00:06:59.604 "subsystem": "bdev", 00:06:59.604 "config": [ 00:06:59.604 { 00:06:59.604 "params": { 00:06:59.604 "block_size": 4096, 00:06:59.604 "filename": "dd_sparse_aio_disk", 00:06:59.604 "name": "dd_aio" 00:06:59.604 }, 00:06:59.604 "method": "bdev_aio_create" 00:06:59.604 }, 00:06:59.604 { 00:06:59.604 "method": "bdev_wait_for_examine" 00:06:59.604 } 00:06:59.604 ] 00:06:59.604 } 00:06:59.604 ] 00:06:59.604 } 00:06:59.863 [2024-11-20 08:19:47.282130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.863 [2024-11-20 08:19:47.348140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.863 [2024-11-20 08:19:47.409004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.121  [2024-11-20T08:19:47.941Z] Copying: 12/36 [MB] (average 923 MBps) 00:07:00.380 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:00.380 00:07:00.380 real 0m0.670s 00:07:00.380 user 0m0.421s 00:07:00.380 sys 0m0.360s 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:00.380 ************************************ 00:07:00.380 END TEST dd_sparse_bdev_to_file 00:07:00.380 ************************************ 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:00.380 ************************************ 00:07:00.380 END TEST spdk_dd_sparse 00:07:00.380 ************************************ 00:07:00.380 00:07:00.380 real 0m2.494s 00:07:00.380 user 0m1.487s 00:07:00.380 sys 0m1.358s 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:00.380 08:19:47 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:00.380 08:19:47 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:00.380 08:19:47 spdk_dd -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:00.380 08:19:47 spdk_dd -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:00.380 08:19:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:00.380 ************************************ 00:07:00.380 START TEST spdk_dd_negative 00:07:00.380 ************************************ 00:07:00.381 08:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:00.641 * Looking for test storage... 
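The stat comparisons in the sparse tests above are what prove hole preservation: stat %s compares the apparent size of source and destination, while stat %b compares allocated blocks, so an equal apparent size together with a small block count shows the holes survived the copy (24576 blocks of 512 bytes is 12 MiB allocated out of a 36 MiB file). A small sketch of the same check, assuming a filesystem where %b is reported in 512-byte units; the file name is illustrative:

  apparent=$(stat --printf=%s file_zero3)        # apparent size in bytes
  blocks=$(stat --printf=%b file_zero3)          # allocated blocks (512-byte units assumed)
  echo "apparent=${apparent}B allocated=$((blocks * 512))B"
  if [[ "$apparent" -eq 37748736 && "$blocks" -eq 24576 ]]; then
    echo "sparse copy preserved the holes"
  fi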
00:07:00.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:00.641 08:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:07:00.641 08:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1638 -- # lcov --version 00:07:00.641 08:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:07:00.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.641 --rc genhtml_branch_coverage=1 00:07:00.641 --rc genhtml_function_coverage=1 00:07:00.641 --rc genhtml_legend=1 00:07:00.641 --rc geninfo_all_blocks=1 00:07:00.641 --rc geninfo_unexecuted_blocks=1 00:07:00.641 00:07:00.641 ' 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:07:00.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.641 --rc genhtml_branch_coverage=1 00:07:00.641 --rc genhtml_function_coverage=1 00:07:00.641 --rc genhtml_legend=1 00:07:00.641 --rc geninfo_all_blocks=1 00:07:00.641 --rc geninfo_unexecuted_blocks=1 00:07:00.641 00:07:00.641 ' 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:07:00.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.641 --rc genhtml_branch_coverage=1 00:07:00.641 --rc genhtml_function_coverage=1 00:07:00.641 --rc genhtml_legend=1 00:07:00.641 --rc geninfo_all_blocks=1 00:07:00.641 --rc geninfo_unexecuted_blocks=1 00:07:00.641 00:07:00.641 ' 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:07:00.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.641 --rc genhtml_branch_coverage=1 00:07:00.641 --rc genhtml_function_coverage=1 00:07:00.641 --rc genhtml_legend=1 00:07:00.641 --rc geninfo_all_blocks=1 00:07:00.641 --rc geninfo_unexecuted_blocks=1 00:07:00.641 00:07:00.641 ' 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:00.641 ************************************ 00:07:00.641 START TEST dd_invalid_arguments 00:07:00.641 ************************************ 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1132 -- # invalid_arguments 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # local es=0 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.641 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.642 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:00.642 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:00.642 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:00.642 00:07:00.642 CPU options: 00:07:00.642 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:00.642 (like [0,1,10]) 00:07:00.642 --lcores lcore to CPU mapping list. The list is in the format: 00:07:00.642 [<,lcores[@CPUs]>...] 00:07:00.642 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:00.642 Within the group, '-' is used for range separator, 00:07:00.642 ',' is used for single number separator. 
00:07:00.642 '( )' can be omitted for single element group, 00:07:00.642 '@' can be omitted if cpus and lcores have the same value 00:07:00.642 --disable-cpumask-locks Disable CPU core lock files. 00:07:00.642 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:00.642 pollers in the app support interrupt mode) 00:07:00.642 -p, --main-core main (primary) core for DPDK 00:07:00.642 00:07:00.642 Configuration options: 00:07:00.642 -c, --config, --json JSON config file 00:07:00.642 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:00.642 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:00.642 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:00.642 --rpcs-allowed comma-separated list of permitted RPCS 00:07:00.642 --json-ignore-init-errors don't exit on invalid config entry 00:07:00.642 00:07:00.642 Memory options: 00:07:00.642 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:00.642 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:00.642 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:00.642 -R, --huge-unlink unlink huge files after initialization 00:07:00.642 -n, --mem-channels number of memory channels used for DPDK 00:07:00.642 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:00.642 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:00.642 --no-huge run without using hugepages 00:07:00.642 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:00.642 -i, --shm-id shared memory ID (optional) 00:07:00.642 -g, --single-file-segments force creating just one hugetlbfs file 00:07:00.642 00:07:00.642 PCI options: 00:07:00.642 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:00.642 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:00.642 -u, --no-pci disable PCI access 00:07:00.642 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:00.642 00:07:00.642 Log options: 00:07:00.642 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:00.642 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:00.642 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:00.642 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:00.642 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:00.642 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:00.642 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:00.642 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:00.642 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:00.642 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:00.642 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:00.642 --silence-noticelog disable notice level logging to stderr 00:07:00.642 00:07:00.642 Trace options: 00:07:00.642 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:00.642 setting 0 to disable trace (default 32768) 00:07:00.642 Tracepoints vary in size and can use more than one trace entry. 
00:07:00.642 -e, --tpoint-group [:] 00:07:00.642 [2024-11-20 08:19:48.169082] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:00.642 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:00.642 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:00.642 bdev_raid, scheduler, all). 00:07:00.642 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:00.642 a tracepoint group. First tpoint inside a group can be enabled by 00:07:00.642 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:00.642 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:00.642 in /include/spdk_internal/trace_defs.h 00:07:00.642 00:07:00.642 Other options: 00:07:00.642 -h, --help show this usage 00:07:00.642 -v, --version print SPDK version 00:07:00.642 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:00.642 --env-context Opaque context for use of the env implementation 00:07:00.642 00:07:00.642 Application specific: 00:07:00.642 [--------- DD Options ---------] 00:07:00.642 --if Input file. Must specify either --if or --ib. 00:07:00.642 --ib Input bdev. Must specifier either --if or --ib 00:07:00.642 --of Output file. Must specify either --of or --ob. 00:07:00.642 --ob Output bdev. Must specify either --of or --ob. 00:07:00.642 --iflag Input file flags. 00:07:00.642 --oflag Output file flags. 00:07:00.642 --bs I/O unit size (default: 4096) 00:07:00.642 --qd Queue depth (default: 2) 00:07:00.642 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:00.642 --skip Skip this many I/O units at start of input. (default: 0) 00:07:00.642 --seek Skip this many I/O units at start of output. (default: 0) 00:07:00.642 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:00.642 --sparse Enable hole skipping in input target 00:07:00.642 Available iflag and oflag values: 00:07:00.642 append - append mode 00:07:00.642 direct - use direct I/O for data 00:07:00.642 directory - fail unless a directory 00:07:00.642 dsync - use synchronized I/O for data 00:07:00.642 noatime - do not update access time 00:07:00.642 noctty - do not assign controlling terminal from file 00:07:00.642 nofollow - do not follow symlinks 00:07:00.642 nonblock - use non-blocking I/O 00:07:00.642 sync - use synchronized I/O for data and metadata 00:07:00.642 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@658 -- # es=2 00:07:00.642 ************************************ 00:07:00.642 END TEST dd_invalid_arguments 00:07:00.642 ************************************ 00:07:00.642 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:00.642 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:07:00.642 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:00.642 00:07:00.642 real 0m0.085s 00:07:00.642 user 0m0.050s 00:07:00.642 sys 0m0.032s 00:07:00.642 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:00.642 08:19:48 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:00.901 ************************************ 00:07:00.901 START TEST dd_double_input 00:07:00.901 ************************************ 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1132 -- # double_input 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # local es=0 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:00.901 [2024-11-20 08:19:48.310876] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@658 -- # es=22 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:00.901 00:07:00.901 real 0m0.085s 00:07:00.901 user 0m0.052s 00:07:00.901 sys 0m0.030s 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:00.901 ************************************ 00:07:00.901 END TEST dd_double_input 00:07:00.901 ************************************ 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:00.901 ************************************ 00:07:00.901 START TEST dd_double_output 00:07:00.901 ************************************ 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1132 -- # double_output 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # local es=0 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:00.901 08:19:48 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.901 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:00.901 [2024-11-20 08:19:48.447618] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@658 -- # es=22 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:01.160 00:07:01.160 real 0m0.087s 00:07:01.160 user 0m0.054s 00:07:01.160 sys 0m0.031s 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:01.160 ************************************ 00:07:01.160 END TEST dd_double_output 00:07:01.160 ************************************ 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.160 ************************************ 00:07:01.160 START TEST dd_no_input 00:07:01.160 ************************************ 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1132 -- # no_input 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # local es=0 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:01.160 [2024-11-20 08:19:48.582205] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@658 -- # es=22 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:07:01.160 ************************************ 00:07:01.160 END TEST dd_no_input 00:07:01.160 ************************************ 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:01.160 00:07:01.160 real 0m0.077s 00:07:01.160 user 0m0.049s 00:07:01.160 sys 0m0.027s 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.160 ************************************ 00:07:01.160 START TEST dd_no_output 00:07:01.160 ************************************ 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1132 -- # no_output 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # local es=0 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.160 08:19:48 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.160 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.160 [2024-11-20 08:19:48.715337] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@658 -- # es=22 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:01.419 00:07:01.419 real 0m0.080s 00:07:01.419 user 0m0.045s 00:07:01.419 sys 0m0.034s 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:01.419 ************************************ 00:07:01.419 END TEST dd_no_output 00:07:01.419 ************************************ 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.419 ************************************ 00:07:01.419 START TEST dd_wrong_blocksize 00:07:01.419 ************************************ 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1132 -- # wrong_blocksize 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # local es=0 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:01.419 08:19:48 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:01.419 [2024-11-20 08:19:48.854075] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@658 -- # es=22 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:01.419 00:07:01.419 real 0m0.080s 00:07:01.419 user 0m0.057s 00:07:01.419 sys 0m0.022s 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:01.419 ************************************ 00:07:01.419 END TEST dd_wrong_blocksize 00:07:01.419 ************************************ 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.419 ************************************ 00:07:01.419 START TEST dd_smaller_blocksize 00:07:01.419 ************************************ 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1132 -- # smaller_blocksize 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # local es=0 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:01.419 08:19:48 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.419 08:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:01.678 [2024-11-20 08:19:48.997396] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:07:01.678 [2024-11-20 08:19:48.997522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61563 ] 00:07:01.678 [2024-11-20 08:19:49.146667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.678 [2024-11-20 08:19:49.207159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.935 [2024-11-20 08:19:49.265431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.193 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:02.452 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:02.452 [2024-11-20 08:19:49.877885] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:02.452 [2024-11-20 08:19:49.878026] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.452 [2024-11-20 08:19:49.995878] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:02.710 08:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@658 -- # es=244 00:07:02.710 08:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:02.710 08:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@667 -- # es=116 00:07:02.710 08:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # case "$es" in 00:07:02.710 08:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # es=1 00:07:02.710 08:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:02.710 00:07:02.710 real 0m1.136s 00:07:02.710 user 0m0.420s 00:07:02.710 sys 0m0.606s 00:07:02.710 08:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:02.710 ************************************ 00:07:02.710 END TEST dd_smaller_blocksize 00:07:02.711 ************************************ 00:07:02.711 08:19:50 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:02.711 ************************************ 00:07:02.711 START TEST dd_invalid_count 00:07:02.711 ************************************ 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1132 -- # invalid_count 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # local es=0 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:02.711 [2024-11-20 08:19:50.183934] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@658 -- # es=22 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:02.711 00:07:02.711 real 0m0.080s 00:07:02.711 user 0m0.049s 00:07:02.711 sys 0m0.029s 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@1133 -- # xtrace_disable 00:07:02.711 ************************************ 00:07:02.711 END TEST dd_invalid_count 00:07:02.711 ************************************ 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:02.711 ************************************ 00:07:02.711 START TEST dd_invalid_oflag 00:07:02.711 ************************************ 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1132 -- # invalid_oflag 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # local es=0 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.711 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:02.970 [2024-11-20 08:19:50.316734] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@658 -- # es=22 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:02.970 00:07:02.970 real 0m0.079s 00:07:02.970 user 0m0.039s 00:07:02.970 sys 0m0.038s 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1133 -- # xtrace_disable 
00:07:02.970 ************************************ 00:07:02.970 END TEST dd_invalid_oflag 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:02.970 ************************************ 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:02.970 ************************************ 00:07:02.970 START TEST dd_invalid_iflag 00:07:02.970 ************************************ 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1132 -- # invalid_iflag 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # local es=0 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:02.970 [2024-11-20 08:19:50.450714] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@658 -- # es=22 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:02.970 00:07:02.970 real 0m0.083s 00:07:02.970 user 0m0.050s 00:07:02.970 sys 0m0.032s 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:02.970 08:19:50 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:02.970 ************************************ 00:07:02.970 END TEST dd_invalid_iflag 00:07:02.970 ************************************ 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:02.970 ************************************ 00:07:02.970 START TEST dd_unknown_flag 00:07:02.970 ************************************ 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1132 -- # unknown_flag 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # local es=0 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:02.970 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.243 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:03.243 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.243 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:03.243 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.243 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:03.243 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.243 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.243 08:19:50 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:03.243 [2024-11-20 08:19:50.590114] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:07:03.243 [2024-11-20 08:19:50.590248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61666 ] 00:07:03.243 [2024-11-20 08:19:50.740752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.243 [2024-11-20 08:19:50.789704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.501 [2024-11-20 08:19:50.842265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.501 [2024-11-20 08:19:50.875490] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:03.501 [2024-11-20 08:19:50.875585] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.501 [2024-11-20 08:19:50.875638] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:03.501 [2024-11-20 08:19:50.875651] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.501 [2024-11-20 08:19:50.875960] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:03.501 [2024-11-20 08:19:50.875978] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.501 [2024-11-20 08:19:50.876031] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:03.501 [2024-11-20 08:19:50.876042] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:03.501 [2024-11-20 08:19:50.989585] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:03.501 08:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@658 -- # es=234 00:07:03.501 08:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:03.501 08:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@667 -- # es=106 00:07:03.501 08:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # case "$es" in 00:07:03.501 08:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # es=1 00:07:03.501 08:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:03.501 00:07:03.501 real 0m0.530s 00:07:03.501 user 0m0.277s 00:07:03.501 sys 0m0.155s 00:07:03.501 08:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:03.501 08:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:03.501 ************************************ 00:07:03.501 END TEST dd_unknown_flag 00:07:03.501 ************************************ 00:07:03.758 08:19:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:03.759 ************************************ 00:07:03.759 START TEST dd_invalid_json 00:07:03.759 ************************************ 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1132 -- # invalid_json 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # local es=0 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.759 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:03.759 [2024-11-20 08:19:51.178028] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:07:03.759 [2024-11-20 08:19:51.178137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61689 ] 00:07:04.016 [2024-11-20 08:19:51.325131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.016 [2024-11-20 08:19:51.379314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.016 [2024-11-20 08:19:51.379386] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:04.016 [2024-11-20 08:19:51.379401] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:04.016 [2024-11-20 08:19:51.379409] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.016 [2024-11-20 08:19:51.379442] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@658 -- # es=234 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@667 -- # es=106 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # case "$es" in 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # es=1 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:04.016 00:07:04.016 real 0m0.329s 00:07:04.016 user 0m0.158s 00:07:04.016 sys 0m0.069s 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.016 ************************************ 00:07:04.016 END TEST dd_invalid_json 00:07:04.016 ************************************ 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.016 ************************************ 00:07:04.016 START TEST dd_invalid_seek 00:07:04.016 ************************************ 00:07:04.016 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1132 -- # invalid_seek 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:04.017 
08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # local es=0 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.017 08:19:51 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:04.017 { 00:07:04.017 "subsystems": [ 00:07:04.017 { 00:07:04.017 "subsystem": "bdev", 00:07:04.017 "config": [ 00:07:04.017 { 00:07:04.017 "params": { 00:07:04.017 "block_size": 512, 00:07:04.017 "num_blocks": 512, 00:07:04.017 "name": "malloc0" 00:07:04.017 }, 00:07:04.017 "method": "bdev_malloc_create" 00:07:04.017 }, 00:07:04.017 { 00:07:04.017 "params": { 00:07:04.017 "block_size": 512, 00:07:04.017 "num_blocks": 512, 00:07:04.017 "name": "malloc1" 00:07:04.017 }, 00:07:04.017 "method": "bdev_malloc_create" 00:07:04.017 }, 00:07:04.017 { 00:07:04.017 "method": "bdev_wait_for_examine" 00:07:04.017 } 00:07:04.017 ] 00:07:04.017 } 00:07:04.017 ] 00:07:04.017 } 00:07:04.017 [2024-11-20 08:19:51.560544] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:07:04.017 [2024-11-20 08:19:51.560632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61724 ] 00:07:04.274 [2024-11-20 08:19:51.705355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.274 [2024-11-20 08:19:51.758118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.274 [2024-11-20 08:19:51.812621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.532 [2024-11-20 08:19:51.871482] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:04.533 [2024-11-20 08:19:51.871573] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.533 [2024-11-20 08:19:51.985153] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:04.533 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@658 -- # es=228 00:07:04.533 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:04.533 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@667 -- # es=100 00:07:04.533 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@668 -- # case "$es" in 00:07:04.533 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@675 -- # es=1 00:07:04.533 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:04.533 00:07:04.533 real 0m0.547s 00:07:04.533 user 0m0.351s 00:07:04.533 sys 0m0.152s 00:07:04.533 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:04.533 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:04.533 ************************************ 00:07:04.533 END TEST dd_invalid_seek 00:07:04.533 ************************************ 00:07:04.533 08:19:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:04.533 08:19:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:04.533 08:19:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:04.533 08:19:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.790 ************************************ 00:07:04.790 START TEST dd_invalid_skip 00:07:04.790 ************************************ 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1132 -- # invalid_skip 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # local es=0 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.790 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:04.790 { 00:07:04.790 "subsystems": [ 00:07:04.790 { 00:07:04.790 "subsystem": "bdev", 00:07:04.790 "config": [ 00:07:04.790 { 00:07:04.790 "params": { 00:07:04.790 "block_size": 512, 00:07:04.790 "num_blocks": 512, 00:07:04.790 "name": "malloc0" 00:07:04.790 }, 00:07:04.790 "method": "bdev_malloc_create" 00:07:04.791 }, 00:07:04.791 { 00:07:04.791 "params": { 00:07:04.791 "block_size": 512, 00:07:04.791 "num_blocks": 512, 00:07:04.791 "name": "malloc1" 00:07:04.791 }, 00:07:04.791 "method": "bdev_malloc_create" 00:07:04.791 }, 00:07:04.791 { 00:07:04.791 "method": "bdev_wait_for_examine" 00:07:04.791 } 00:07:04.791 ] 00:07:04.791 } 00:07:04.791 ] 00:07:04.791 } 00:07:04.791 [2024-11-20 08:19:52.163224] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:07:04.791 [2024-11-20 08:19:52.163336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61752 ] 00:07:04.791 [2024-11-20 08:19:52.312456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.050 [2024-11-20 08:19:52.369700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.050 [2024-11-20 08:19:52.426763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.050 [2024-11-20 08:19:52.485813] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:05.050 [2024-11-20 08:19:52.485916] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.050 [2024-11-20 08:19:52.606570] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@658 -- # es=228 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@667 -- # es=100 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@668 -- # case "$es" in 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@675 -- # es=1 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:05.310 00:07:05.310 real 0m0.575s 00:07:05.310 user 0m0.376s 00:07:05.310 sys 0m0.158s 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:05.310 ************************************ 00:07:05.310 END TEST dd_invalid_skip 00:07:05.310 ************************************ 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.310 ************************************ 00:07:05.310 START TEST dd_invalid_input_count 00:07:05.310 ************************************ 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1132 -- # invalid_input_count 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # local es=0 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.310 08:19:52 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:05.310 { 00:07:05.310 "subsystems": [ 00:07:05.311 { 00:07:05.311 "subsystem": "bdev", 00:07:05.311 "config": [ 00:07:05.311 { 00:07:05.311 "params": { 00:07:05.311 "block_size": 512, 00:07:05.311 "num_blocks": 512, 00:07:05.311 "name": "malloc0" 00:07:05.311 }, 00:07:05.311 "method": "bdev_malloc_create" 00:07:05.311 }, 00:07:05.311 { 00:07:05.311 "params": { 00:07:05.311 "block_size": 512, 00:07:05.311 "num_blocks": 512, 00:07:05.311 "name": "malloc1" 00:07:05.311 }, 00:07:05.311 "method": "bdev_malloc_create" 00:07:05.311 }, 00:07:05.311 { 00:07:05.311 "method": "bdev_wait_for_examine" 00:07:05.311 } 00:07:05.311 ] 00:07:05.311 } 00:07:05.311 ] 00:07:05.311 } 00:07:05.311 [2024-11-20 08:19:52.798517] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:07:05.311 [2024-11-20 08:19:52.798656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61791 ] 00:07:05.570 [2024-11-20 08:19:52.946734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.570 [2024-11-20 08:19:53.002911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.570 [2024-11-20 08:19:53.056324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.570 [2024-11-20 08:19:53.117419] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:05.570 [2024-11-20 08:19:53.117536] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.829 [2024-11-20 08:19:53.246471] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@658 -- # es=228 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@667 -- # es=100 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@668 -- # case "$es" in 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@675 -- # es=1 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:05.829 00:07:05.829 real 0m0.584s 00:07:05.829 user 0m0.386s 00:07:05.829 sys 0m0.157s 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:05.829 ************************************ 00:07:05.829 END TEST dd_invalid_input_count 00:07:05.829 ************************************ 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.829 ************************************ 00:07:05.829 START TEST dd_invalid_output_count 00:07:05.829 ************************************ 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1132 -- # invalid_output_count 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # local es=0 00:07:05.829 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:05.830 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.830 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:05.830 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:05.830 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:05.830 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:05.830 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.830 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:05.830 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.830 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:05.830 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.830 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.830 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:06.088 { 00:07:06.088 "subsystems": [ 00:07:06.088 { 00:07:06.088 "subsystem": "bdev", 00:07:06.088 "config": [ 00:07:06.088 { 00:07:06.088 "params": { 00:07:06.088 "block_size": 512, 00:07:06.088 "num_blocks": 512, 00:07:06.088 "name": "malloc0" 00:07:06.088 }, 00:07:06.088 "method": "bdev_malloc_create" 00:07:06.088 }, 00:07:06.088 { 00:07:06.088 "method": "bdev_wait_for_examine" 00:07:06.088 } 00:07:06.088 ] 00:07:06.088 } 00:07:06.088 ] 00:07:06.088 } 00:07:06.088 [2024-11-20 08:19:53.435701] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:07:06.088 [2024-11-20 08:19:53.435861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61824 ] 00:07:06.088 [2024-11-20 08:19:53.583371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.088 [2024-11-20 08:19:53.642083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.347 [2024-11-20 08:19:53.695665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.347 [2024-11-20 08:19:53.748139] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:06.347 [2024-11-20 08:19:53.748244] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.347 [2024-11-20 08:19:53.872224] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:06.605 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@658 -- # es=228 00:07:06.605 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:06.605 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@667 -- # es=100 00:07:06.605 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@668 -- # case "$es" in 00:07:06.605 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@675 -- # es=1 00:07:06.605 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:06.605 00:07:06.605 real 0m0.573s 00:07:06.605 user 0m0.378s 00:07:06.605 sys 0m0.154s 00:07:06.605 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:06.605 08:19:53 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:06.605 ************************************ 00:07:06.605 END TEST dd_invalid_output_count 00:07:06.605 ************************************ 00:07:06.605 08:19:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:06.605 08:19:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:06.605 08:19:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:06.605 08:19:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.605 ************************************ 00:07:06.605 START TEST dd_bs_not_multiple 00:07:06.605 ************************************ 00:07:06.605 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1132 -- # bs_not_multiple 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:06.606 08:19:54 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # local es=0 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.606 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:06.606 [2024-11-20 08:19:54.065260] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:07:06.606 [2024-11-20 08:19:54.065406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61856 ] 00:07:06.606 { 00:07:06.606 "subsystems": [ 00:07:06.606 { 00:07:06.606 "subsystem": "bdev", 00:07:06.606 "config": [ 00:07:06.606 { 00:07:06.606 "params": { 00:07:06.606 "block_size": 512, 00:07:06.606 "num_blocks": 512, 00:07:06.606 "name": "malloc0" 00:07:06.606 }, 00:07:06.606 "method": "bdev_malloc_create" 00:07:06.606 }, 00:07:06.606 { 00:07:06.606 "params": { 00:07:06.606 "block_size": 512, 00:07:06.606 "num_blocks": 512, 00:07:06.606 "name": "malloc1" 00:07:06.606 }, 00:07:06.606 "method": "bdev_malloc_create" 00:07:06.606 }, 00:07:06.606 { 00:07:06.606 "method": "bdev_wait_for_examine" 00:07:06.606 } 00:07:06.606 ] 00:07:06.606 } 00:07:06.606 ] 00:07:06.606 } 00:07:06.864 [2024-11-20 08:19:54.210220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.864 [2024-11-20 08:19:54.264355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.864 [2024-11-20 08:19:54.318483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.864 [2024-11-20 08:19:54.382986] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:06.864 [2024-11-20 08:19:54.383042] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.122 [2024-11-20 08:19:54.508558] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.122 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@658 -- # es=234 00:07:07.122 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:07.122 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@667 -- # es=106 00:07:07.122 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@668 -- # case "$es" in 00:07:07.122 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@675 -- # es=1 00:07:07.122 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:07.122 00:07:07.122 real 0m0.574s 00:07:07.122 user 0m0.371s 00:07:07.122 sys 0m0.161s 00:07:07.122 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:07.122 08:19:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:07.122 ************************************ 00:07:07.122 END TEST dd_bs_not_multiple 00:07:07.122 ************************************ 00:07:07.122 ************************************ 00:07:07.122 END TEST spdk_dd_negative 00:07:07.122 ************************************ 00:07:07.122 00:07:07.122 real 0m6.756s 00:07:07.122 user 0m3.573s 00:07:07.122 sys 0m2.554s 00:07:07.122 08:19:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:07.122 08:19:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.122 00:07:07.122 real 1m19.534s 00:07:07.122 user 0m50.359s 00:07:07.122 sys 0m36.047s 00:07:07.122 08:19:54 spdk_dd -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:07.122 08:19:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:07.122 
************************************ 00:07:07.122 END TEST spdk_dd 00:07:07.122 ************************************ 00:07:07.380 08:19:54 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:07.380 08:19:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:07.380 08:19:54 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:07.380 08:19:54 -- common/autotest_common.sh@735 -- # xtrace_disable 00:07:07.380 08:19:54 -- common/autotest_common.sh@10 -- # set +x 00:07:07.380 08:19:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:07.380 08:19:54 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:07.380 08:19:54 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:07.380 08:19:54 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:07.380 08:19:54 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:07.380 08:19:54 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:07.380 08:19:54 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.380 08:19:54 -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:07:07.380 08:19:54 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:07.380 08:19:54 -- common/autotest_common.sh@10 -- # set +x 00:07:07.380 ************************************ 00:07:07.380 START TEST nvmf_tcp 00:07:07.380 ************************************ 00:07:07.380 08:19:54 nvmf_tcp -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.380 * Looking for test storage... 00:07:07.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:07.380 08:19:54 nvmf_tcp -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:07:07.380 08:19:54 nvmf_tcp -- common/autotest_common.sh@1638 -- # lcov --version 00:07:07.380 08:19:54 nvmf_tcp -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:07:07.638 08:19:54 nvmf_tcp -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.639 08:19:54 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:07.639 08:19:54 nvmf_tcp -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.639 08:19:54 nvmf_tcp -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:07:07.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.639 --rc genhtml_branch_coverage=1 00:07:07.639 --rc genhtml_function_coverage=1 00:07:07.639 --rc genhtml_legend=1 00:07:07.639 --rc geninfo_all_blocks=1 00:07:07.639 --rc geninfo_unexecuted_blocks=1 00:07:07.639 00:07:07.639 ' 00:07:07.639 08:19:54 nvmf_tcp -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:07:07.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.639 --rc genhtml_branch_coverage=1 00:07:07.639 --rc genhtml_function_coverage=1 00:07:07.639 --rc genhtml_legend=1 00:07:07.639 --rc geninfo_all_blocks=1 00:07:07.639 --rc geninfo_unexecuted_blocks=1 00:07:07.639 00:07:07.639 ' 00:07:07.639 08:19:54 nvmf_tcp -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:07:07.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.639 --rc genhtml_branch_coverage=1 00:07:07.639 --rc genhtml_function_coverage=1 00:07:07.639 --rc genhtml_legend=1 00:07:07.639 --rc geninfo_all_blocks=1 00:07:07.639 --rc geninfo_unexecuted_blocks=1 00:07:07.639 00:07:07.639 ' 00:07:07.639 08:19:54 nvmf_tcp -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:07:07.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.639 --rc genhtml_branch_coverage=1 00:07:07.639 --rc genhtml_function_coverage=1 00:07:07.639 --rc genhtml_legend=1 00:07:07.639 --rc geninfo_all_blocks=1 00:07:07.639 --rc geninfo_unexecuted_blocks=1 00:07:07.639 00:07:07.639 ' 00:07:07.639 08:19:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:07.639 08:19:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:07.639 08:19:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:07.639 08:19:54 nvmf_tcp -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:07:07.639 08:19:54 nvmf_tcp -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:07.639 08:19:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.639 ************************************ 00:07:07.639 START TEST nvmf_target_core 00:07:07.639 ************************************ 00:07:07.639 08:19:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:07.639 * Looking for test storage... 00:07:07.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1638 -- # lcov --version 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.639 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:07:07.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.898 --rc genhtml_branch_coverage=1 00:07:07.898 --rc genhtml_function_coverage=1 00:07:07.898 --rc genhtml_legend=1 00:07:07.898 --rc geninfo_all_blocks=1 00:07:07.898 --rc geninfo_unexecuted_blocks=1 00:07:07.898 00:07:07.898 ' 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:07:07.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.898 --rc genhtml_branch_coverage=1 00:07:07.898 --rc genhtml_function_coverage=1 00:07:07.898 --rc genhtml_legend=1 00:07:07.898 --rc geninfo_all_blocks=1 00:07:07.898 --rc geninfo_unexecuted_blocks=1 00:07:07.898 00:07:07.898 ' 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:07:07.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.898 --rc genhtml_branch_coverage=1 00:07:07.898 --rc genhtml_function_coverage=1 00:07:07.898 --rc genhtml_legend=1 00:07:07.898 --rc geninfo_all_blocks=1 00:07:07.898 --rc geninfo_unexecuted_blocks=1 00:07:07.898 00:07:07.898 ' 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:07:07.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.898 --rc genhtml_branch_coverage=1 00:07:07.898 --rc genhtml_function_coverage=1 00:07:07.898 --rc genhtml_legend=1 00:07:07.898 --rc geninfo_all_blocks=1 00:07:07.898 --rc geninfo_unexecuted_blocks=1 00:07:07.898 00:07:07.898 ' 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.898 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- 
nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:07.898 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.899 ************************************ 00:07:07.899 START TEST nvmf_host_management 00:07:07.899 ************************************ 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:07.899 * Looking for test storage... 00:07:07.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1638 -- # lcov --version 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:07:07.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.899 --rc genhtml_branch_coverage=1 00:07:07.899 --rc genhtml_function_coverage=1 00:07:07.899 --rc genhtml_legend=1 00:07:07.899 --rc geninfo_all_blocks=1 00:07:07.899 --rc geninfo_unexecuted_blocks=1 00:07:07.899 00:07:07.899 ' 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:07:07.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.899 --rc genhtml_branch_coverage=1 00:07:07.899 --rc genhtml_function_coverage=1 00:07:07.899 --rc genhtml_legend=1 00:07:07.899 --rc geninfo_all_blocks=1 00:07:07.899 --rc geninfo_unexecuted_blocks=1 00:07:07.899 00:07:07.899 ' 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:07:07.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.899 --rc genhtml_branch_coverage=1 00:07:07.899 --rc genhtml_function_coverage=1 00:07:07.899 --rc genhtml_legend=1 00:07:07.899 --rc geninfo_all_blocks=1 00:07:07.899 --rc geninfo_unexecuted_blocks=1 00:07:07.899 00:07:07.899 ' 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:07:07.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.899 --rc genhtml_branch_coverage=1 00:07:07.899 --rc genhtml_function_coverage=1 00:07:07.899 --rc genhtml_legend=1 00:07:07.899 --rc geninfo_all_blocks=1 00:07:07.899 --rc geninfo_unexecuted_blocks=1 00:07:07.899 00:07:07.899 ' 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.899 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.157 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.157 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:07:08.157 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:07:08.157 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.158 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:08.158 Cannot find device "nvmf_init_br" 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:08.158 Cannot find device "nvmf_init_br2" 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:08.158 Cannot find device "nvmf_tgt_br" 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 
00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:08.158 Cannot find device "nvmf_tgt_br2" 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:08.158 Cannot find device "nvmf_init_br" 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:08.158 Cannot find device "nvmf_init_br2" 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:08.158 Cannot find device "nvmf_tgt_br" 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:08.158 Cannot find device "nvmf_tgt_br2" 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:08.158 Cannot find device "nvmf_br" 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:08.158 Cannot find device "nvmf_init_if" 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:08.158 Cannot find device "nvmf_init_if2" 00:07:08.158 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:08.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:08.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:08.159 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:08.416 
08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:08.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:08.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:07:08.416 00:07:08.416 --- 10.0.0.3 ping statistics --- 00:07:08.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.416 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:08.416 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:08.416 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:07:08.416 00:07:08.416 --- 10.0.0.4 ping statistics --- 00:07:08.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.416 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:08.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:07:08.416 00:07:08.416 --- 10.0.0.1 ping statistics --- 00:07:08.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.416 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:08.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:08.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:07:08.416 00:07:08.416 --- 10.0.0.2 ping statistics --- 00:07:08.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.416 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62221 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62221 00:07:08.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # '[' -z 62221 ']' 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:08.416 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.706 [2024-11-20 08:19:56.021901] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:07:08.706 [2024-11-20 08:19:56.021984] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.706 [2024-11-20 08:19:56.177973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.706 [2024-11-20 08:19:56.256998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.706 [2024-11-20 08:19:56.257261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.706 [2024-11-20 08:19:56.257362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.706 [2024-11-20 08:19:56.257442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.706 [2024-11-20 08:19:56.257523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:08.706 [2024-11-20 08:19:56.259460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.706 [2024-11-20 08:19:56.259613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.706 [2024-11-20 08:19:56.259747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.706 [2024-11-20 08:19:56.259750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.964 [2024-11-20 08:19:56.319578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@871 -- # return 0 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@735 -- # xtrace_disable 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.964 [2024-11-20 08:19:56.446588] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:08.964 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.964 Malloc0 00:07:09.223 [2024-11-20 08:19:56.535188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@735 -- # xtrace_disable 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62273 00:07:09.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62273 /var/tmp/bdevperf.sock 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # '[' -z 62273 ']' 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:09.223 { 00:07:09.223 "params": { 00:07:09.223 "name": "Nvme$subsystem", 00:07:09.223 "trtype": "$TEST_TRANSPORT", 00:07:09.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:09.223 "adrfam": "ipv4", 00:07:09.223 "trsvcid": "$NVMF_PORT", 00:07:09.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:09.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:09.223 "hdgst": ${hdgst:-false}, 00:07:09.223 "ddgst": ${ddgst:-false} 00:07:09.223 }, 00:07:09.223 "method": "bdev_nvme_attach_controller" 00:07:09.223 } 00:07:09.223 EOF 00:07:09.223 )") 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:09.223 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:09.224 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:09.224 "params": { 00:07:09.224 "name": "Nvme0", 00:07:09.224 "trtype": "tcp", 00:07:09.224 "traddr": "10.0.0.3", 00:07:09.224 "adrfam": "ipv4", 00:07:09.224 "trsvcid": "4420", 00:07:09.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.224 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:09.224 "hdgst": false, 00:07:09.224 "ddgst": false 00:07:09.224 }, 00:07:09.224 "method": "bdev_nvme_attach_controller" 00:07:09.224 }' 00:07:09.224 [2024-11-20 08:19:56.637899] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:07:09.224 [2024-11-20 08:19:56.638216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62273 ] 00:07:09.483 [2024-11-20 08:19:56.787825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.483 [2024-11-20 08:19:56.855548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.483 [2024-11-20 08:19:56.921382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.483 Running I/O for 10 seconds... 
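Note on the bdevperf invocation traced above: the `--json /dev/fd/63` argument appears to be a process-substitution fd carrying the JSON emitted by `gen_nvmf_target_json` (the config printed just before this block). A minimal sketch of the equivalent manual invocation, assuming the repo layout and options shown in this trace:

    # Sketch only; binary path, socket, and options copied from the trace above.
    # gen_nvmf_target_json 0 emits the bdev_nvme_attach_controller config for Nvme0.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10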
00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@871 -- # return 0 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:09.742 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:10.002 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:10.002 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:10.002 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:10.002 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:10.002 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.002 08:19:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:10.002 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:10.002 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:10.003 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:10.003 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:10.003 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:10.003 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:10.003 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:10.003 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:10.003 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.003 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:10.003 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:10.003 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@566 -- # xtrace_disable 00:07:10.003 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.003 [2024-11-20 08:19:57.481556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.481612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.481658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.481679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.481701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.481718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.481737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.481754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.481773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.481791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.481828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.481846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.481866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.481882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.481902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.481918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.481935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.481951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.481969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.481985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:10.003 [2024-11-20 08:19:57.482161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 
[2024-11-20 08:19:57.482436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.003 [2024-11-20 08:19:57.482552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.003 [2024-11-20 08:19:57.482570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 
08:19:57.482659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 
08:19:57.482889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.482982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.482993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 
08:19:57.483114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.004 [2024-11-20 08:19:57.483288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcd2d0 is same with the state(6) to be set 00:07:10.004 [2024-11-20 08:19:57.483480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:10.004 [2024-11-20 08:19:57.483497] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:10.004 [2024-11-20 08:19:57.483518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:10.004 [2024-11-20 08:19:57.483537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:10.004 [2024-11-20 08:19:57.483565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.004 [2024-11-20 08:19:57.483584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd2ce0 is same with the state(6) to be set 00:07:10.004 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:10.004 00:07:10.004 Latency(us) 00:07:10.004 [2024-11-20T08:19:57.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.004 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:10.004 Job: Nvme0n1 ended in about 0.44 seconds with error 00:07:10.004 Verification LBA range: start 0x0 length 0x400 00:07:10.004 Nvme0n1 : 0.44 1443.53 90.22 144.35 0.00 39001.94 2591.65 37891.72 00:07:10.005 [2024-11-20T08:19:57.566Z] =================================================================================================================== 00:07:10.005 [2024-11-20T08:19:57.566Z] Total : 1443.53 90.22 144.35 0.00 39001.94 2591.65 37891.72 00:07:10.005 [2024-11-20 08:19:57.484698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:10.005 [2024-11-20 08:19:57.486746] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.005 [2024-11-20 08:19:57.486772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd2ce0 (9): Bad file descriptor 00:07:10.005 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:07:10.005 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:10.005 [2024-11-20 08:19:57.497942] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
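The burst of ABORTED/SQ-DELETION completions and the failed 0.44 s bdevperf job above are the expected effect of the host-management step: the test revokes the host NQN from the subsystem, which tears down the live qpairs, then re-adds it so the controller reset can reconnect ("Resetting controller successful"). A sketch of the two RPCs as issued via rpc_cmd in the trace (scripts/rpc.py shown here only for illustration; NQNs copied from the log):

    # Sketch; subsystem and host NQNs taken from the trace above.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # drops the active connection
    scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # lets the controller reset reconnect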
00:07:10.941 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62273 00:07:10.941 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62273) - No such process 00:07:10.941 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:10.941 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:11.200 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:11.200 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:11.200 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:11.200 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:11.200 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:11.200 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:11.200 { 00:07:11.200 "params": { 00:07:11.200 "name": "Nvme$subsystem", 00:07:11.200 "trtype": "$TEST_TRANSPORT", 00:07:11.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:11.200 "adrfam": "ipv4", 00:07:11.200 "trsvcid": "$NVMF_PORT", 00:07:11.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:11.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:11.200 "hdgst": ${hdgst:-false}, 00:07:11.200 "ddgst": ${ddgst:-false} 00:07:11.200 }, 00:07:11.200 "method": "bdev_nvme_attach_controller" 00:07:11.200 } 00:07:11.200 EOF 00:07:11.200 )") 00:07:11.200 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:11.200 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:11.200 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:11.200 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:11.200 "params": { 00:07:11.200 "name": "Nvme0", 00:07:11.200 "trtype": "tcp", 00:07:11.200 "traddr": "10.0.0.3", 00:07:11.200 "adrfam": "ipv4", 00:07:11.201 "trsvcid": "4420", 00:07:11.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:11.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:11.201 "hdgst": false, 00:07:11.201 "ddgst": false 00:07:11.201 }, 00:07:11.201 "method": "bdev_nvme_attach_controller" 00:07:11.201 }' 00:07:11.201 [2024-11-20 08:19:58.559975] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:07:11.201 [2024-11-20 08:19:58.560072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62302 ] 00:07:11.201 [2024-11-20 08:19:58.708757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.459 [2024-11-20 08:19:58.790117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.459 [2024-11-20 08:19:58.860126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.459 Running I/O for 1 seconds... 00:07:12.835 1472.00 IOPS, 92.00 MiB/s 00:07:12.835 Latency(us) 00:07:12.835 [2024-11-20T08:20:00.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.835 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:12.835 Verification LBA range: start 0x0 length 0x400 00:07:12.835 Nvme0n1 : 1.03 1496.72 93.55 0.00 0.00 41809.04 4110.89 44564.48 00:07:12.835 [2024-11-20T08:20:00.396Z] =================================================================================================================== 00:07:12.835 [2024-11-20T08:20:00.396Z] Total : 1496.72 93.55 0.00 0.00 41809.04 4110.89 44564.48 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:12.835 rmmod nvme_tcp 00:07:12.835 rmmod nvme_fabrics 00:07:12.835 rmmod nvme_keyring 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62221 ']' 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62221 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' -z 62221 ']' 00:07:12.835 08:20:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@961 -- # kill -0 62221 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # uname 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 62221 00:07:12.835 killing process with pid 62221 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@975 -- # echo 'killing process with pid 62221' 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # kill 62221 00:07:12.835 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@981 -- # wait 62221 00:07:13.093 [2024-11-20 08:20:00.530884] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:13.093 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:13.093 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:13.094 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:13.352 08:20:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:13.352 00:07:13.352 real 0m5.568s 00:07:13.352 user 0m19.543s 00:07:13.352 sys 0m1.525s 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.352 ************************************ 00:07:13.352 END TEST nvmf_host_management 00:07:13.352 ************************************ 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.352 ************************************ 00:07:13.352 START TEST nvmf_lvol 00:07:13.352 ************************************ 00:07:13.352 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:13.611 * Looking for test storage... 
00:07:13.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:13.611 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:07:13.611 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1638 -- # lcov --version 00:07:13.611 08:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:07:13.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.611 --rc genhtml_branch_coverage=1 00:07:13.611 --rc genhtml_function_coverage=1 00:07:13.611 --rc genhtml_legend=1 00:07:13.611 --rc geninfo_all_blocks=1 00:07:13.611 --rc geninfo_unexecuted_blocks=1 00:07:13.611 00:07:13.611 ' 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:07:13.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.611 --rc genhtml_branch_coverage=1 00:07:13.611 --rc genhtml_function_coverage=1 00:07:13.611 --rc genhtml_legend=1 00:07:13.611 --rc geninfo_all_blocks=1 00:07:13.611 --rc geninfo_unexecuted_blocks=1 00:07:13.611 00:07:13.611 ' 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:07:13.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.611 --rc genhtml_branch_coverage=1 00:07:13.611 --rc genhtml_function_coverage=1 00:07:13.611 --rc genhtml_legend=1 00:07:13.611 --rc geninfo_all_blocks=1 00:07:13.611 --rc geninfo_unexecuted_blocks=1 00:07:13.611 00:07:13.611 ' 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:07:13.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.611 --rc genhtml_branch_coverage=1 00:07:13.611 --rc genhtml_function_coverage=1 00:07:13.611 --rc genhtml_legend=1 00:07:13.611 --rc geninfo_all_blocks=1 00:07:13.611 --rc geninfo_unexecuted_blocks=1 00:07:13.611 00:07:13.611 ' 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.611 08:20:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.611 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:13.612 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:13.612 Cannot find device "nvmf_init_br" 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:13.612 Cannot find device "nvmf_init_br2" 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:13.612 Cannot find device "nvmf_tgt_br" 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:13.612 Cannot find device "nvmf_tgt_br2" 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:13.612 Cannot find device "nvmf_init_br" 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:13.612 Cannot find device "nvmf_init_br2" 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:13.612 Cannot find device "nvmf_tgt_br" 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:13.612 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:13.871 Cannot find device "nvmf_tgt_br2" 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:13.871 Cannot find device "nvmf_br" 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:13.871 Cannot find device "nvmf_init_if" 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # 
true 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:13.871 Cannot find device "nvmf_init_if2" 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:13.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:13.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:13.871 08:20:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:13.871 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:13.871 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:07:13.871 00:07:13.871 --- 10.0.0.3 ping statistics --- 00:07:13.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.871 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:07:13.871 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:14.131 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:14.131 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:07:14.131 00:07:14.131 --- 10.0.0.4 ping statistics --- 00:07:14.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.131 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:14.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:14.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:14.131 00:07:14.131 --- 10.0.0.1 ping statistics --- 00:07:14.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.131 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:14.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:14.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:07:14.131 00:07:14.131 --- 10.0.0.2 ping statistics --- 00:07:14.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.131 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62578 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62578 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # '[' -z 62578 ']' 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:14.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:14.131 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.131 [2024-11-20 08:20:01.532710] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
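The nvmf_veth_init trace above amounts to a small virtual topology: veth pairs for the initiator side (10.0.0.1 and 10.0.0.2 in the root namespace), veth pairs for the target side (10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace), all joined by the nvmf_br bridge, with iptables ACCEPT rules tagged SPDK_NVMF and a one-packet ping in each direction as a sanity check. A condensed sketch, trimmed to a single initiator/target pair and using the same names and addresses as the trace (not the full helper), looks like this:

    # one initiator end (root netns) and one target end (nvmf_tgt_ns_spdk), joined by a bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + its bridge-side peer
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + its bridge-side peer
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the initiator-side peer
    ip link set nvmf_tgt_br master nvmf_br                      # bridge the target-side peer
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.3                                          # initiator -> target, as in the trace

The full helper repeats this for nvmf_init_if2/nvmf_tgt_if2 (10.0.0.2 and 10.0.0.4) and tags every iptables rule so that nvmftestfini can later strip them with iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen in the teardown further down.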
00:07:14.131 [2024-11-20 08:20:01.532833] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.131 [2024-11-20 08:20:01.684256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.391 [2024-11-20 08:20:01.748783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.391 [2024-11-20 08:20:01.749014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.391 [2024-11-20 08:20:01.749112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.391 [2024-11-20 08:20:01.749178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.391 [2024-11-20 08:20:01.749239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:14.391 [2024-11-20 08:20:01.750509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.391 [2024-11-20 08:20:01.750561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.391 [2024-11-20 08:20:01.750567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.391 [2024-11-20 08:20:01.805628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.391 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:14.391 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@871 -- # return 0 00:07:14.391 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:14.391 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@735 -- # xtrace_disable 00:07:14.391 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.391 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.391 08:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:14.674 [2024-11-20 08:20:02.211294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.933 08:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:15.192 08:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:15.192 08:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:15.451 08:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:15.451 08:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:15.710 08:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:15.969 08:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7f93f67c-c568-48b6-aaa9-3d2c042de03f 00:07:15.969 08:20:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7f93f67c-c568-48b6-aaa9-3d2c042de03f lvol 20 00:07:16.228 08:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e18aa0d4-8d58-4303-94ec-6562396e7362 00:07:16.228 08:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:16.487 08:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e18aa0d4-8d58-4303-94ec-6562396e7362 00:07:16.747 08:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:17.005 [2024-11-20 08:20:04.509120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:17.005 08:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:17.264 08:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:17.264 08:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62652 00:07:17.264 08:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:18.639 08:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot e18aa0d4-8d58-4303-94ec-6562396e7362 MY_SNAPSHOT 00:07:18.639 08:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fcea3d9a-6bbe-4fa6-a2c5-d36347ef0bbd 00:07:18.639 08:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize e18aa0d4-8d58-4303-94ec-6562396e7362 30 00:07:18.897 08:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone fcea3d9a-6bbe-4fa6-a2c5-d36347ef0bbd MY_CLONE 00:07:19.155 08:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=df36e9d5-68cc-4f12-a444-47aa365f46d0 00:07:19.155 08:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate df36e9d5-68cc-4f12-a444-47aa365f46d0 00:07:19.721 08:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62652 00:07:27.834 Initializing NVMe Controllers 00:07:27.834 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:27.834 Controller IO queue size 128, less than required. 00:07:27.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:27.834 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:27.834 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:27.834 Initialization complete. Launching workers. 
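Behind the nvmf_lvol.sh trace above sits a short rpc.py provisioning sequence: two 64 MiB malloc bdevs striped into a RAID0, an lvstore on top of the RAID, a 20 MiB lvol exported as namespace 1 of nqn.2016-06.io.spdk:cnode0 on a TCP listener at 10.0.0.3:4420, and then snapshot, resize, clone and inflate operations issued while spdk_nvme_perf (4 KiB random writes, queue depth 128, 10 s) drives I/O against the exported namespace. Stripped of paths, timestamps and error handling, and with the UUIDs printed in the trace replaced by placeholders, the sequence is roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                    # Malloc0
    rpc.py bdev_malloc_create 64 512                    # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs           # prints <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20       # prints <lvol-uuid>, 20 MiB initial size
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # spdk_nvme_perf connects to 10.0.0.3:4420 in the background, then:
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT   # prints <snap-uuid>
    rpc.py bdev_lvol_resize <lvol-uuid> 30              # grow the live lvol to 30 MiB
    rpc.py bdev_lvol_clone <snap-uuid> MY_CLONE         # prints <clone-uuid>
    rpc.py bdev_lvol_inflate <clone-uuid>               # make the clone independent of its snapshot

The perf results table that follows shows the workload still running on cores 3 and 4 while those lvol operations were applied.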
00:07:27.834 ======================================================== 00:07:27.834 Latency(us) 00:07:27.834 Device Information : IOPS MiB/s Average min max 00:07:27.834 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9778.70 38.20 13090.20 2617.47 76144.34 00:07:27.834 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9812.20 38.33 13048.37 2396.07 77771.45 00:07:27.834 ======================================================== 00:07:27.834 Total : 19590.90 76.53 13069.25 2396.07 77771.45 00:07:27.834 00:07:27.834 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:28.092 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e18aa0d4-8d58-4303-94ec-6562396e7362 00:07:28.350 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7f93f67c-c568-48b6-aaa9-3d2c042de03f 00:07:28.610 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:28.610 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:28.610 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:28.610 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:28.610 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.610 rmmod nvme_tcp 00:07:28.610 rmmod nvme_fabrics 00:07:28.610 rmmod nvme_keyring 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62578 ']' 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62578 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' -z 62578 ']' 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@961 -- # kill -0 62578 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # uname 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 62578 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:07:28.610 killing process with pid 62578 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@975 -- # echo 'killing process with pid 62578' 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # kill 62578 00:07:28.610 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@981 -- # wait 62578 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:28.869 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:29.127 00:07:29.127 real 0m15.735s 00:07:29.127 user 1m4.932s 00:07:29.127 sys 0m4.182s 00:07:29.127 ************************************ 00:07:29.127 END TEST nvmf_lvol 00:07:29.127 
************************************ 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.127 ************************************ 00:07:29.127 START TEST nvmf_lvs_grow 00:07:29.127 ************************************ 00:07:29.127 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:29.387 * Looking for test storage... 00:07:29.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1638 -- # lcov --version 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.387 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:07:29.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.388 --rc genhtml_branch_coverage=1 00:07:29.388 --rc genhtml_function_coverage=1 00:07:29.388 --rc genhtml_legend=1 00:07:29.388 --rc geninfo_all_blocks=1 00:07:29.388 --rc geninfo_unexecuted_blocks=1 00:07:29.388 00:07:29.388 ' 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:07:29.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.388 --rc genhtml_branch_coverage=1 00:07:29.388 --rc genhtml_function_coverage=1 00:07:29.388 --rc genhtml_legend=1 00:07:29.388 --rc geninfo_all_blocks=1 00:07:29.388 --rc geninfo_unexecuted_blocks=1 00:07:29.388 00:07:29.388 ' 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:07:29.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.388 --rc genhtml_branch_coverage=1 00:07:29.388 --rc genhtml_function_coverage=1 00:07:29.388 --rc genhtml_legend=1 00:07:29.388 --rc geninfo_all_blocks=1 00:07:29.388 --rc geninfo_unexecuted_blocks=1 00:07:29.388 00:07:29.388 ' 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:07:29.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.388 --rc genhtml_branch_coverage=1 00:07:29.388 --rc genhtml_function_coverage=1 00:07:29.388 --rc genhtml_legend=1 00:07:29.388 --rc geninfo_all_blocks=1 00:07:29.388 --rc geninfo_unexecuted_blocks=1 00:07:29.388 00:07:29.388 ' 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:29.388 08:20:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.388 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.389 08:20:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.389 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:29.389 Cannot find device "nvmf_init_br" 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:29.389 Cannot find device "nvmf_init_br2" 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:29.389 Cannot find device "nvmf_tgt_br" 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:29.389 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.648 Cannot find device "nvmf_tgt_br2" 00:07:29.648 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:29.648 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:29.648 Cannot find device "nvmf_init_br" 00:07:29.648 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:29.648 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:29.648 Cannot find device "nvmf_init_br2" 00:07:29.648 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:29.648 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:29.648 Cannot find device "nvmf_tgt_br" 00:07:29.648 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:29.648 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:29.648 Cannot find device "nvmf_tgt_br2" 00:07:29.648 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:29.648 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:29.648 Cannot find device "nvmf_br" 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:29.648 Cannot find device "nvmf_init_if" 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:29.648 
08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:29.648 Cannot find device "nvmf_init_if2" 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:29.648 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:29.907 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:29.907 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:29.907 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:29.907 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:29.907 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:29.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:29.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:07:29.908 00:07:29.908 --- 10.0.0.3 ping statistics --- 00:07:29.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.908 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:29.908 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:29.908 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:07:29.908 00:07:29.908 --- 10.0.0.4 ping statistics --- 00:07:29.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.908 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:29.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:29.908 00:07:29.908 --- 10.0.0.1 ping statistics --- 00:07:29.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.908 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:29.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:29.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:07:29.908 00:07:29.908 --- 10.0.0.2 ping statistics --- 00:07:29.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.908 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63037 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63037 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # '[' -z 63037 ']' 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:29.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:29.908 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.908 [2024-11-20 08:20:17.356636] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
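At this point nvmf/common.sh has finished building the test network: the target side lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.3 and 10.0.0.4, the initiator side keeps 10.0.0.1 and 10.0.0.2 in the root namespace, the four veth peers hang off the nvmf_br bridge, and iptables accepts TCP port 4420 on the initiator interfaces; the four pings confirm reachability in both directions. A condensed, hand-runnable sketch of the same topology (one interface pair shown; the second pair is set up identically):

    # condensed sketch of the veth/bridge topology built above (run as root)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace -> target namespace, as verified in the log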
00:07:29.908 [2024-11-20 08:20:17.356766] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.167 [2024-11-20 08:20:17.511852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.167 [2024-11-20 08:20:17.579232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.167 [2024-11-20 08:20:17.579318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.167 [2024-11-20 08:20:17.579333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.167 [2024-11-20 08:20:17.579343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.167 [2024-11-20 08:20:17.579352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.167 [2024-11-20 08:20:17.579889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.167 [2024-11-20 08:20:17.637522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.102 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:31.102 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@871 -- # return 0 00:07:31.102 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:31.102 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@735 -- # xtrace_disable 00:07:31.102 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.102 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.102 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:31.102 [2024-11-20 08:20:18.604045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.102 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:31.102 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:07:31.102 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:31.102 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.102 ************************************ 00:07:31.102 START TEST lvs_grow_clean 00:07:31.103 ************************************ 00:07:31.103 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1132 -- # lvs_grow 00:07:31.103 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:31.103 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:31.103 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:31.103 08:20:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:31.103 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:31.103 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:31.103 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:31.103 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:31.103 08:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:31.671 08:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:31.671 08:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:31.928 08:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=71ebeba6-1322-4113-92a1-4637d7569fd6 00:07:31.928 08:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ebeba6-1322-4113-92a1-4637d7569fd6 00:07:31.928 08:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:32.188 08:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:32.188 08:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:32.188 08:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 71ebeba6-1322-4113-92a1-4637d7569fd6 lvol 150 00:07:32.447 08:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=708c9e06-c1da-4623-8ff9-e7d6c909687f 00:07:32.447 08:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:32.447 08:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:32.706 [2024-11-20 08:20:20.215978] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:32.706 [2024-11-20 08:20:20.216090] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:32.706 true 00:07:32.706 08:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:32.706 08:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ebeba6-1322-4113-92a1-4637d7569fd6 00:07:32.965 08:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:32.965 08:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:33.223 08:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 708c9e06-c1da-4623-8ff9-e7d6c909687f 00:07:33.482 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:33.741 [2024-11-20 08:20:21.264566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:33.741 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:33.999 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63125 00:07:33.999 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:33.999 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.000 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63125 /var/tmp/bdevperf.sock 00:07:34.000 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # '[' -z 63125 ']' 00:07:34.000 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:34.000 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:34.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:34.000 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:34.000 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:34.000 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:34.258 [2024-11-20 08:20:21.572231] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
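The cluster counts above follow from the sizes in use: the 200 MiB aio file with a 4 MiB cluster (--cluster-sz 4194304) leaves 49 usable data clusters after lvstore metadata, and the 150 MiB lvol will occupy 38 of them (rounded up to the 4 MiB boundary, i.e. 152 MiB, the 38912 4 KiB blocks reported later by bdev_get_bdevs). Truncating the file to 400 MiB and rescanning does not change total_data_clusters by itself; that is what bdev_lvol_grow_lvstore does later in the run, after which the same query reports 99. A hedged sketch of the RPC sequence, assuming a target on the default /var/tmp/spdk.sock and a hypothetical /tmp/aio_file backing path:

    # sketch of the clean grow flow (sizes as in the log; file path is illustrative)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    truncate -s 200M /tmp/aio_file
    $rpc bdev_aio_create /tmp/aio_file aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs | tr -d '"')
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    $rpc bdev_lvol_create -u "$lvs" lvol 150
    truncate -s 400M /tmp/aio_file
    $rpc bdev_aio_rescan aio_bdev
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99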
00:07:34.258 [2024-11-20 08:20:21.572328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63125 ] 00:07:34.258 [2024-11-20 08:20:21.717646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.258 [2024-11-20 08:20:21.780521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.518 [2024-11-20 08:20:21.836285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.518 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:34.518 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@871 -- # return 0 00:07:34.518 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:34.828 Nvme0n1 00:07:34.828 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:35.087 [ 00:07:35.087 { 00:07:35.087 "name": "Nvme0n1", 00:07:35.087 "aliases": [ 00:07:35.087 "708c9e06-c1da-4623-8ff9-e7d6c909687f" 00:07:35.087 ], 00:07:35.087 "product_name": "NVMe disk", 00:07:35.087 "block_size": 4096, 00:07:35.087 "num_blocks": 38912, 00:07:35.087 "uuid": "708c9e06-c1da-4623-8ff9-e7d6c909687f", 00:07:35.087 "numa_id": -1, 00:07:35.087 "assigned_rate_limits": { 00:07:35.087 "rw_ios_per_sec": 0, 00:07:35.087 "rw_mbytes_per_sec": 0, 00:07:35.087 "r_mbytes_per_sec": 0, 00:07:35.087 "w_mbytes_per_sec": 0 00:07:35.087 }, 00:07:35.087 "claimed": false, 00:07:35.087 "zoned": false, 00:07:35.087 "supported_io_types": { 00:07:35.087 "read": true, 00:07:35.087 "write": true, 00:07:35.087 "unmap": true, 00:07:35.087 "flush": true, 00:07:35.087 "reset": true, 00:07:35.087 "nvme_admin": true, 00:07:35.087 "nvme_io": true, 00:07:35.087 "nvme_io_md": false, 00:07:35.087 "write_zeroes": true, 00:07:35.087 "zcopy": false, 00:07:35.088 "get_zone_info": false, 00:07:35.088 "zone_management": false, 00:07:35.088 "zone_append": false, 00:07:35.088 "compare": true, 00:07:35.088 "compare_and_write": true, 00:07:35.088 "abort": true, 00:07:35.088 "seek_hole": false, 00:07:35.088 "seek_data": false, 00:07:35.088 "copy": true, 00:07:35.088 "nvme_iov_md": false 00:07:35.088 }, 00:07:35.088 "memory_domains": [ 00:07:35.088 { 00:07:35.088 "dma_device_id": "system", 00:07:35.088 "dma_device_type": 1 00:07:35.088 } 00:07:35.088 ], 00:07:35.088 "driver_specific": { 00:07:35.088 "nvme": [ 00:07:35.088 { 00:07:35.088 "trid": { 00:07:35.088 "trtype": "TCP", 00:07:35.088 "adrfam": "IPv4", 00:07:35.088 "traddr": "10.0.0.3", 00:07:35.088 "trsvcid": "4420", 00:07:35.088 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:35.088 }, 00:07:35.088 "ctrlr_data": { 00:07:35.088 "cntlid": 1, 00:07:35.088 "vendor_id": "0x8086", 00:07:35.088 "model_number": "SPDK bdev Controller", 00:07:35.088 "serial_number": "SPDK0", 00:07:35.088 "firmware_revision": "25.01", 00:07:35.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:35.088 "oacs": { 00:07:35.088 "security": 0, 00:07:35.088 "format": 0, 00:07:35.088 "firmware": 0, 
00:07:35.088 "ns_manage": 0 00:07:35.088 }, 00:07:35.088 "multi_ctrlr": true, 00:07:35.088 "ana_reporting": false 00:07:35.088 }, 00:07:35.088 "vs": { 00:07:35.088 "nvme_version": "1.3" 00:07:35.088 }, 00:07:35.088 "ns_data": { 00:07:35.088 "id": 1, 00:07:35.088 "can_share": true 00:07:35.088 } 00:07:35.088 } 00:07:35.088 ], 00:07:35.088 "mp_policy": "active_passive" 00:07:35.088 } 00:07:35.088 } 00:07:35.088 ] 00:07:35.088 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63141 00:07:35.088 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:35.088 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:35.346 Running I/O for 10 seconds... 00:07:36.284 Latency(us) 00:07:36.284 [2024-11-20T08:20:23.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.284 Nvme0n1 : 1.00 6699.00 26.17 0.00 0.00 0.00 0.00 0.00 00:07:36.284 [2024-11-20T08:20:23.845Z] =================================================================================================================== 00:07:36.284 [2024-11-20T08:20:23.845Z] Total : 6699.00 26.17 0.00 0.00 0.00 0.00 0.00 00:07:36.284 00:07:37.219 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 71ebeba6-1322-4113-92a1-4637d7569fd6 00:07:37.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.219 Nvme0n1 : 2.00 6651.50 25.98 0.00 0.00 0.00 0.00 0.00 00:07:37.219 [2024-11-20T08:20:24.780Z] =================================================================================================================== 00:07:37.219 [2024-11-20T08:20:24.780Z] Total : 6651.50 25.98 0.00 0.00 0.00 0.00 0.00 00:07:37.219 00:07:37.477 true 00:07:37.477 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ebeba6-1322-4113-92a1-4637d7569fd6 00:07:37.477 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:37.737 08:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:37.737 08:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:37.737 08:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63141 00:07:38.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.305 Nvme0n1 : 3.00 6720.33 26.25 0.00 0.00 0.00 0.00 0.00 00:07:38.305 [2024-11-20T08:20:25.866Z] =================================================================================================================== 00:07:38.305 [2024-11-20T08:20:25.866Z] Total : 6720.33 26.25 0.00 0.00 0.00 0.00 0.00 00:07:38.305 00:07:39.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.241 Nvme0n1 : 4.00 6659.50 26.01 0.00 0.00 0.00 0.00 0.00 00:07:39.241 [2024-11-20T08:20:26.802Z] 
=================================================================================================================== 00:07:39.241 [2024-11-20T08:20:26.802Z] Total : 6659.50 26.01 0.00 0.00 0.00 0.00 0.00 00:07:39.241 00:07:40.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.177 Nvme0n1 : 5.00 6648.40 25.97 0.00 0.00 0.00 0.00 0.00 00:07:40.177 [2024-11-20T08:20:27.738Z] =================================================================================================================== 00:07:40.177 [2024-11-20T08:20:27.738Z] Total : 6648.40 25.97 0.00 0.00 0.00 0.00 0.00 00:07:40.177 00:07:41.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.114 Nvme0n1 : 6.00 6641.00 25.94 0.00 0.00 0.00 0.00 0.00 00:07:41.114 [2024-11-20T08:20:28.675Z] =================================================================================================================== 00:07:41.114 [2024-11-20T08:20:28.675Z] Total : 6641.00 25.94 0.00 0.00 0.00 0.00 0.00 00:07:41.114 00:07:42.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.492 Nvme0n1 : 7.00 6617.57 25.85 0.00 0.00 0.00 0.00 0.00 00:07:42.492 [2024-11-20T08:20:30.053Z] =================================================================================================================== 00:07:42.492 [2024-11-20T08:20:30.053Z] Total : 6617.57 25.85 0.00 0.00 0.00 0.00 0.00 00:07:42.492 00:07:43.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.097 Nvme0n1 : 8.00 6631.75 25.91 0.00 0.00 0.00 0.00 0.00 00:07:43.097 [2024-11-20T08:20:30.658Z] =================================================================================================================== 00:07:43.097 [2024-11-20T08:20:30.658Z] Total : 6631.75 25.91 0.00 0.00 0.00 0.00 0.00 00:07:43.097 00:07:44.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.473 Nvme0n1 : 9.00 6628.67 25.89 0.00 0.00 0.00 0.00 0.00 00:07:44.473 [2024-11-20T08:20:32.034Z] =================================================================================================================== 00:07:44.473 [2024-11-20T08:20:32.034Z] Total : 6628.67 25.89 0.00 0.00 0.00 0.00 0.00 00:07:44.473 00:07:45.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.409 Nvme0n1 : 10.00 6613.50 25.83 0.00 0.00 0.00 0.00 0.00 00:07:45.409 [2024-11-20T08:20:32.970Z] =================================================================================================================== 00:07:45.409 [2024-11-20T08:20:32.970Z] Total : 6613.50 25.83 0.00 0.00 0.00 0.00 0.00 00:07:45.409 00:07:45.409 00:07:45.409 Latency(us) 00:07:45.409 [2024-11-20T08:20:32.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.409 Nvme0n1 : 10.01 6617.76 25.85 0.00 0.00 19336.07 5481.19 87222.46 00:07:45.409 [2024-11-20T08:20:32.970Z] =================================================================================================================== 00:07:45.409 [2024-11-20T08:20:32.970Z] Total : 6617.76 25.85 0.00 0.00 19336.07 5481.19 87222.46 00:07:45.409 { 00:07:45.409 "results": [ 00:07:45.409 { 00:07:45.409 "job": "Nvme0n1", 00:07:45.409 "core_mask": "0x2", 00:07:45.409 "workload": "randwrite", 00:07:45.409 "status": "finished", 00:07:45.409 "queue_depth": 128, 00:07:45.409 "io_size": 4096, 00:07:45.409 "runtime": 
10.012905, 00:07:45.409 "iops": 6617.759781002616, 00:07:45.409 "mibps": 25.85062414454147, 00:07:45.409 "io_failed": 0, 00:07:45.409 "io_timeout": 0, 00:07:45.409 "avg_latency_us": 19336.07000248322, 00:07:45.409 "min_latency_us": 5481.192727272727, 00:07:45.409 "max_latency_us": 87222.45818181818 00:07:45.409 } 00:07:45.409 ], 00:07:45.409 "core_count": 1 00:07:45.409 } 00:07:45.409 08:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63125 00:07:45.409 08:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' -z 63125 ']' 00:07:45.409 08:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@961 -- # kill -0 63125 00:07:45.409 08:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # uname 00:07:45.409 08:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:07:45.409 08:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 63125 00:07:45.409 08:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:07:45.409 08:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:07:45.409 killing process with pid 63125 00:07:45.409 Received shutdown signal, test time was about 10.000000 seconds 00:07:45.409 00:07:45.409 Latency(us) 00:07:45.409 [2024-11-20T08:20:32.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.409 [2024-11-20T08:20:32.970Z] =================================================================================================================== 00:07:45.409 [2024-11-20T08:20:32.970Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:45.409 08:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@975 -- # echo 'killing process with pid 63125' 00:07:45.409 08:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # kill 63125 00:07:45.409 08:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@981 -- # wait 63125 00:07:45.409 08:20:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:45.976 08:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:45.976 08:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ebeba6-1322-4113-92a1-4637d7569fd6 00:07:45.976 08:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:46.544 08:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:46.544 08:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:46.544 08:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:46.544 [2024-11-20 08:20:34.010347] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:46.544 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ebeba6-1322-4113-92a1-4637d7569fd6 00:07:46.544 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # local es=0 00:07:46.544 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ebeba6-1322-4113-92a1-4637d7569fd6 00:07:46.544 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.544 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:46.544 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.544 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:46.544 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.544 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:07:46.544 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.544 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:46.545 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ebeba6-1322-4113-92a1-4637d7569fd6 00:07:46.804 request: 00:07:46.804 { 00:07:46.804 "uuid": "71ebeba6-1322-4113-92a1-4637d7569fd6", 00:07:46.804 "method": "bdev_lvol_get_lvstores", 00:07:46.804 "req_id": 1 00:07:46.804 } 00:07:46.804 Got JSON-RPC error response 00:07:46.804 response: 00:07:46.804 { 00:07:46.804 "code": -19, 00:07:46.804 "message": "No such device" 00:07:46.804 } 00:07:46.804 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@658 -- # es=1 00:07:46.804 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:07:46.804 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:07:46.804 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:07:46.804 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.062 aio_bdev 00:07:47.062 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
708c9e06-c1da-4623-8ff9-e7d6c909687f 00:07:47.062 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # local bdev_name=708c9e06-c1da-4623-8ff9-e7d6c909687f 00:07:47.062 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # local bdev_timeout= 00:07:47.062 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # local i 00:07:47.062 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # [[ -z '' ]] 00:07:47.062 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # bdev_timeout=2000 00:07:47.062 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:47.321 08:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 708c9e06-c1da-4623-8ff9-e7d6c909687f -t 2000 00:07:47.580 [ 00:07:47.580 { 00:07:47.580 "name": "708c9e06-c1da-4623-8ff9-e7d6c909687f", 00:07:47.580 "aliases": [ 00:07:47.580 "lvs/lvol" 00:07:47.580 ], 00:07:47.580 "product_name": "Logical Volume", 00:07:47.580 "block_size": 4096, 00:07:47.580 "num_blocks": 38912, 00:07:47.580 "uuid": "708c9e06-c1da-4623-8ff9-e7d6c909687f", 00:07:47.580 "assigned_rate_limits": { 00:07:47.580 "rw_ios_per_sec": 0, 00:07:47.580 "rw_mbytes_per_sec": 0, 00:07:47.580 "r_mbytes_per_sec": 0, 00:07:47.580 "w_mbytes_per_sec": 0 00:07:47.580 }, 00:07:47.580 "claimed": false, 00:07:47.580 "zoned": false, 00:07:47.580 "supported_io_types": { 00:07:47.580 "read": true, 00:07:47.580 "write": true, 00:07:47.580 "unmap": true, 00:07:47.580 "flush": false, 00:07:47.580 "reset": true, 00:07:47.580 "nvme_admin": false, 00:07:47.580 "nvme_io": false, 00:07:47.580 "nvme_io_md": false, 00:07:47.580 "write_zeroes": true, 00:07:47.580 "zcopy": false, 00:07:47.580 "get_zone_info": false, 00:07:47.580 "zone_management": false, 00:07:47.580 "zone_append": false, 00:07:47.580 "compare": false, 00:07:47.580 "compare_and_write": false, 00:07:47.580 "abort": false, 00:07:47.580 "seek_hole": true, 00:07:47.580 "seek_data": true, 00:07:47.580 "copy": false, 00:07:47.580 "nvme_iov_md": false 00:07:47.580 }, 00:07:47.580 "driver_specific": { 00:07:47.580 "lvol": { 00:07:47.580 "lvol_store_uuid": "71ebeba6-1322-4113-92a1-4637d7569fd6", 00:07:47.580 "base_bdev": "aio_bdev", 00:07:47.580 "thin_provision": false, 00:07:47.580 "num_allocated_clusters": 38, 00:07:47.580 "snapshot": false, 00:07:47.580 "clone": false, 00:07:47.580 "esnap_clone": false 00:07:47.580 } 00:07:47.580 } 00:07:47.580 } 00:07:47.580 ] 00:07:47.580 08:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@914 -- # return 0 00:07:47.580 08:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ebeba6-1322-4113-92a1-4637d7569fd6 00:07:47.580 08:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:47.840 08:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:47.840 08:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ebeba6-1322-4113-92a1-4637d7569fd6 00:07:47.840 08:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:48.098 08:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:48.098 08:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 708c9e06-c1da-4623-8ff9-e7d6c909687f 00:07:48.358 08:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 71ebeba6-1322-4113-92a1-4637d7569fd6 00:07:48.616 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:48.875 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:49.441 00:07:49.441 real 0m18.153s 00:07:49.441 user 0m17.144s 00:07:49.441 sys 0m2.465s 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1133 -- # xtrace_disable 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:49.441 ************************************ 00:07:49.441 END TEST lvs_grow_clean 00:07:49.441 ************************************ 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1114 -- # xtrace_disable 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:49.441 ************************************ 00:07:49.441 START TEST lvs_grow_dirty 00:07:49.441 ************************************ 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1132 -- # lvs_grow dirty 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:49.441 08:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:49.700 08:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:49.700 08:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:49.959 08:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6e9edf8b-fe26-4538-b8fe-a9553af304be 00:07:49.959 08:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:07:49.959 08:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:50.255 08:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:50.255 08:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:50.255 08:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e9edf8b-fe26-4538-b8fe-a9553af304be lvol 150 00:07:50.822 08:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=563bb34b-9df5-4fcc-9b91-80eaad6737d6 00:07:50.822 08:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:50.822 08:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:50.822 [2024-11-20 08:20:38.326888] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:50.822 [2024-11-20 08:20:38.327024] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:50.822 true 00:07:50.822 08:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:07:50.822 08:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:51.081 08:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:51.081 08:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:51.647 08:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 563bb34b-9df5-4fcc-9b91-80eaad6737d6 00:07:51.648 08:20:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:52.214 [2024-11-20 08:20:39.483534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:52.214 08:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:52.214 08:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63395 00:07:52.214 08:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:52.214 08:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.214 08:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63395 /var/tmp/bdevperf.sock 00:07:52.214 08:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # '[' -z 63395 ']' 00:07:52.214 08:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:52.214 08:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@843 -- # local max_retries=100 00:07:52.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:52.214 08:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:52.214 08:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@847 -- # xtrace_disable 00:07:52.214 08:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.473 [2024-11-20 08:20:39.800851] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
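As in the clean variant, bdevperf runs as a separate initiator process (core mask 0x2, its own RPC socket) and reaches the target's 10.0.0.3:4420 listener from the root namespace across the nvmf_br bridge. A minimal sketch of the attach-and-run step that follows, assuming bdevperf was started with -z so it waits for RPC on /var/tmp/bdevperf.sock as shown above:

    # sketch: attach the exported namespace over TCP and start the 10 s randwrite run
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000    # wait until Nvme0n1 appears
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests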
00:07:52.473 [2024-11-20 08:20:39.801009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63395 ] 00:07:52.473 [2024-11-20 08:20:39.951311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.473 [2024-11-20 08:20:40.016349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.731 [2024-11-20 08:20:40.073997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.298 08:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:07:53.298 08:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@871 -- # return 0 00:07:53.298 08:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:53.865 Nvme0n1 00:07:53.865 08:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:54.123 [ 00:07:54.123 { 00:07:54.123 "name": "Nvme0n1", 00:07:54.123 "aliases": [ 00:07:54.123 "563bb34b-9df5-4fcc-9b91-80eaad6737d6" 00:07:54.123 ], 00:07:54.123 "product_name": "NVMe disk", 00:07:54.123 "block_size": 4096, 00:07:54.123 "num_blocks": 38912, 00:07:54.123 "uuid": "563bb34b-9df5-4fcc-9b91-80eaad6737d6", 00:07:54.123 "numa_id": -1, 00:07:54.123 "assigned_rate_limits": { 00:07:54.123 "rw_ios_per_sec": 0, 00:07:54.123 "rw_mbytes_per_sec": 0, 00:07:54.123 "r_mbytes_per_sec": 0, 00:07:54.123 "w_mbytes_per_sec": 0 00:07:54.123 }, 00:07:54.123 "claimed": false, 00:07:54.123 "zoned": false, 00:07:54.123 "supported_io_types": { 00:07:54.123 "read": true, 00:07:54.123 "write": true, 00:07:54.123 "unmap": true, 00:07:54.123 "flush": true, 00:07:54.123 "reset": true, 00:07:54.123 "nvme_admin": true, 00:07:54.123 "nvme_io": true, 00:07:54.123 "nvme_io_md": false, 00:07:54.123 "write_zeroes": true, 00:07:54.123 "zcopy": false, 00:07:54.123 "get_zone_info": false, 00:07:54.123 "zone_management": false, 00:07:54.123 "zone_append": false, 00:07:54.123 "compare": true, 00:07:54.123 "compare_and_write": true, 00:07:54.123 "abort": true, 00:07:54.123 "seek_hole": false, 00:07:54.123 "seek_data": false, 00:07:54.123 "copy": true, 00:07:54.123 "nvme_iov_md": false 00:07:54.123 }, 00:07:54.123 "memory_domains": [ 00:07:54.123 { 00:07:54.123 "dma_device_id": "system", 00:07:54.123 "dma_device_type": 1 00:07:54.123 } 00:07:54.123 ], 00:07:54.123 "driver_specific": { 00:07:54.123 "nvme": [ 00:07:54.123 { 00:07:54.123 "trid": { 00:07:54.123 "trtype": "TCP", 00:07:54.123 "adrfam": "IPv4", 00:07:54.123 "traddr": "10.0.0.3", 00:07:54.123 "trsvcid": "4420", 00:07:54.123 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:54.123 }, 00:07:54.123 "ctrlr_data": { 00:07:54.123 "cntlid": 1, 00:07:54.123 "vendor_id": "0x8086", 00:07:54.123 "model_number": "SPDK bdev Controller", 00:07:54.123 "serial_number": "SPDK0", 00:07:54.123 "firmware_revision": "25.01", 00:07:54.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.123 "oacs": { 00:07:54.123 "security": 0, 00:07:54.123 "format": 0, 00:07:54.123 "firmware": 0, 
00:07:54.123 "ns_manage": 0 00:07:54.123 }, 00:07:54.123 "multi_ctrlr": true, 00:07:54.123 "ana_reporting": false 00:07:54.123 }, 00:07:54.123 "vs": { 00:07:54.123 "nvme_version": "1.3" 00:07:54.123 }, 00:07:54.123 "ns_data": { 00:07:54.123 "id": 1, 00:07:54.123 "can_share": true 00:07:54.123 } 00:07:54.123 } 00:07:54.123 ], 00:07:54.123 "mp_policy": "active_passive" 00:07:54.123 } 00:07:54.123 } 00:07:54.123 ] 00:07:54.123 08:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63418 00:07:54.123 08:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.123 08:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:54.123 Running I/O for 10 seconds... 00:07:55.058 Latency(us) 00:07:55.058 [2024-11-20T08:20:42.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.058 Nvme0n1 : 1.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:55.058 [2024-11-20T08:20:42.619Z] =================================================================================================================== 00:07:55.058 [2024-11-20T08:20:42.619Z] Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:55.058 00:07:55.994 08:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:07:56.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.252 Nvme0n1 : 2.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:07:56.252 [2024-11-20T08:20:43.813Z] =================================================================================================================== 00:07:56.252 [2024-11-20T08:20:43.813Z] Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:07:56.252 00:07:56.252 true 00:07:56.252 08:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:07:56.252 08:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:56.567 08:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:56.567 08:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:56.567 08:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63418 00:07:57.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.132 Nvme0n1 : 3.00 7281.33 28.44 0.00 0.00 0.00 0.00 0.00 00:07:57.132 [2024-11-20T08:20:44.693Z] =================================================================================================================== 00:07:57.132 [2024-11-20T08:20:44.693Z] Total : 7281.33 28.44 0.00 0.00 0.00 0.00 0.00 00:07:57.132 00:07:58.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.065 Nvme0n1 : 4.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:58.065 [2024-11-20T08:20:45.626Z] 
=================================================================================================================== 00:07:58.065 [2024-11-20T08:20:45.626Z] Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:58.065 00:07:59.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.439 Nvme0n1 : 5.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:59.439 [2024-11-20T08:20:47.000Z] =================================================================================================================== 00:07:59.439 [2024-11-20T08:20:47.000Z] Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:59.439 00:08:00.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.374 Nvme0n1 : 6.00 6991.33 27.31 0.00 0.00 0.00 0.00 0.00 00:08:00.374 [2024-11-20T08:20:47.935Z] =================================================================================================================== 00:08:00.374 [2024-11-20T08:20:47.935Z] Total : 6991.33 27.31 0.00 0.00 0.00 0.00 0.00 00:08:00.374 00:08:01.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.308 Nvme0n1 : 7.00 6972.29 27.24 0.00 0.00 0.00 0.00 0.00 00:08:01.308 [2024-11-20T08:20:48.869Z] =================================================================================================================== 00:08:01.308 [2024-11-20T08:20:48.869Z] Total : 6972.29 27.24 0.00 0.00 0.00 0.00 0.00 00:08:01.308 00:08:02.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.293 Nvme0n1 : 8.00 6942.12 27.12 0.00 0.00 0.00 0.00 0.00 00:08:02.293 [2024-11-20T08:20:49.854Z] =================================================================================================================== 00:08:02.293 [2024-11-20T08:20:49.854Z] Total : 6942.12 27.12 0.00 0.00 0.00 0.00 0.00 00:08:02.293 00:08:03.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.226 Nvme0n1 : 9.00 6961.00 27.19 0.00 0.00 0.00 0.00 0.00 00:08:03.226 [2024-11-20T08:20:50.787Z] =================================================================================================================== 00:08:03.226 [2024-11-20T08:20:50.787Z] Total : 6961.00 27.19 0.00 0.00 0.00 0.00 0.00 00:08:03.226 00:08:04.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.156 Nvme0n1 : 10.00 6963.40 27.20 0.00 0.00 0.00 0.00 0.00 00:08:04.156 [2024-11-20T08:20:51.717Z] =================================================================================================================== 00:08:04.156 [2024-11-20T08:20:51.717Z] Total : 6963.40 27.20 0.00 0.00 0.00 0.00 0.00 00:08:04.156 00:08:04.156 00:08:04.156 Latency(us) 00:08:04.156 [2024-11-20T08:20:51.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.156 Nvme0n1 : 10.01 6969.57 27.22 0.00 0.00 18359.60 12868.89 223060.71 00:08:04.156 [2024-11-20T08:20:51.717Z] =================================================================================================================== 00:08:04.156 [2024-11-20T08:20:51.717Z] Total : 6969.57 27.22 0.00 0.00 18359.60 12868.89 223060.71 00:08:04.156 { 00:08:04.156 "results": [ 00:08:04.156 { 00:08:04.156 "job": "Nvme0n1", 00:08:04.156 "core_mask": "0x2", 00:08:04.156 "workload": "randwrite", 00:08:04.156 "status": "finished", 00:08:04.156 "queue_depth": 128, 00:08:04.156 "io_size": 4096, 00:08:04.156 "runtime": 
10.009507, 00:08:04.156 "iops": 6969.574025973507, 00:08:04.156 "mibps": 27.22489853895901, 00:08:04.156 "io_failed": 0, 00:08:04.156 "io_timeout": 0, 00:08:04.156 "avg_latency_us": 18359.600621124813, 00:08:04.156 "min_latency_us": 12868.887272727272, 00:08:04.156 "max_latency_us": 223060.71272727274 00:08:04.156 } 00:08:04.156 ], 00:08:04.156 "core_count": 1 00:08:04.156 } 00:08:04.156 08:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63395 00:08:04.156 08:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' -z 63395 ']' 00:08:04.156 08:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@961 -- # kill -0 63395 00:08:04.156 08:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # uname 00:08:04.156 08:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:04.156 08:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 63395 00:08:04.156 08:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:08:04.156 08:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:08:04.156 killing process with pid 63395 00:08:04.156 08:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@975 -- # echo 'killing process with pid 63395' 00:08:04.156 Received shutdown signal, test time was about 10.000000 seconds 00:08:04.156 00:08:04.156 Latency(us) 00:08:04.156 [2024-11-20T08:20:51.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.156 [2024-11-20T08:20:51.717Z] =================================================================================================================== 00:08:04.156 [2024-11-20T08:20:51.717Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.156 08:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # kill 63395 00:08:04.156 08:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@981 -- # wait 63395 00:08:04.415 08:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:04.673 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:04.931 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:08:04.931 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63037 
00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63037 00:08:05.497 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63037 Killed "${NVMF_APP[@]}" "$@" 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63557 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63557 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # '[' -z 63557 ']' 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:05.497 08:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.497 [2024-11-20 08:20:52.839890] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:05.497 [2024-11-20 08:20:52.839972] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.497 [2024-11-20 08:20:52.984057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.497 [2024-11-20 08:20:53.042268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.497 [2024-11-20 08:20:53.042334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.497 [2024-11-20 08:20:53.042347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.497 [2024-11-20 08:20:53.042355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.497 [2024-11-20 08:20:53.042363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:05.497 [2024-11-20 08:20:53.042744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.755 [2024-11-20 08:20:53.096439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.755 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:05.755 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@871 -- # return 0 00:08:05.755 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:05.755 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@735 -- # xtrace_disable 00:08:05.755 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.755 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.755 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.013 [2024-11-20 08:20:53.450922] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:06.013 [2024-11-20 08:20:53.451986] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:06.013 [2024-11-20 08:20:53.452313] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:06.013 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:06.013 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 563bb34b-9df5-4fcc-9b91-80eaad6737d6 00:08:06.013 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # local bdev_name=563bb34b-9df5-4fcc-9b91-80eaad6737d6 00:08:06.013 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # local bdev_timeout= 00:08:06.013 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # local i 00:08:06.013 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # [[ -z '' ]] 00:08:06.013 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # bdev_timeout=2000 00:08:06.013 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:06.271 08:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 563bb34b-9df5-4fcc-9b91-80eaad6737d6 -t 2000 00:08:06.529 [ 00:08:06.529 { 00:08:06.529 "name": "563bb34b-9df5-4fcc-9b91-80eaad6737d6", 00:08:06.529 "aliases": [ 00:08:06.529 "lvs/lvol" 00:08:06.529 ], 00:08:06.529 "product_name": "Logical Volume", 00:08:06.529 "block_size": 4096, 00:08:06.529 "num_blocks": 38912, 00:08:06.529 "uuid": "563bb34b-9df5-4fcc-9b91-80eaad6737d6", 00:08:06.529 "assigned_rate_limits": { 00:08:06.529 "rw_ios_per_sec": 0, 00:08:06.529 "rw_mbytes_per_sec": 0, 00:08:06.529 "r_mbytes_per_sec": 0, 00:08:06.529 "w_mbytes_per_sec": 0 00:08:06.529 }, 00:08:06.529 
"claimed": false, 00:08:06.529 "zoned": false, 00:08:06.529 "supported_io_types": { 00:08:06.529 "read": true, 00:08:06.529 "write": true, 00:08:06.529 "unmap": true, 00:08:06.529 "flush": false, 00:08:06.529 "reset": true, 00:08:06.529 "nvme_admin": false, 00:08:06.529 "nvme_io": false, 00:08:06.529 "nvme_io_md": false, 00:08:06.529 "write_zeroes": true, 00:08:06.529 "zcopy": false, 00:08:06.529 "get_zone_info": false, 00:08:06.529 "zone_management": false, 00:08:06.529 "zone_append": false, 00:08:06.529 "compare": false, 00:08:06.529 "compare_and_write": false, 00:08:06.529 "abort": false, 00:08:06.529 "seek_hole": true, 00:08:06.529 "seek_data": true, 00:08:06.529 "copy": false, 00:08:06.529 "nvme_iov_md": false 00:08:06.529 }, 00:08:06.529 "driver_specific": { 00:08:06.529 "lvol": { 00:08:06.529 "lvol_store_uuid": "6e9edf8b-fe26-4538-b8fe-a9553af304be", 00:08:06.529 "base_bdev": "aio_bdev", 00:08:06.529 "thin_provision": false, 00:08:06.529 "num_allocated_clusters": 38, 00:08:06.529 "snapshot": false, 00:08:06.529 "clone": false, 00:08:06.529 "esnap_clone": false 00:08:06.529 } 00:08:06.529 } 00:08:06.529 } 00:08:06.529 ] 00:08:06.529 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@914 -- # return 0 00:08:06.529 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:08:06.529 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:06.788 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:06.788 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:08:06.788 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:07.046 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:07.046 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.304 [2024-11-20 08:20:54.856468] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:07.563 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:08:07.563 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # local es=0 00:08:07.563 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@657 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:08:07.563 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.563 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:08:07.563 08:20:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@647 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.563 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:08:07.563 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.563 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:08:07.563 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.563 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:07.563 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@658 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:08:07.822 request: 00:08:07.822 { 00:08:07.822 "uuid": "6e9edf8b-fe26-4538-b8fe-a9553af304be", 00:08:07.822 "method": "bdev_lvol_get_lvstores", 00:08:07.822 "req_id": 1 00:08:07.822 } 00:08:07.822 Got JSON-RPC error response 00:08:07.822 response: 00:08:07.822 { 00:08:07.822 "code": -19, 00:08:07.822 "message": "No such device" 00:08:07.822 } 00:08:07.822 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@658 -- # es=1 00:08:07.822 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:08:07.822 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:08:07.822 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:08:07.822 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.081 aio_bdev 00:08:08.081 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 563bb34b-9df5-4fcc-9b91-80eaad6737d6 00:08:08.081 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # local bdev_name=563bb34b-9df5-4fcc-9b91-80eaad6737d6 00:08:08.081 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # local bdev_timeout= 00:08:08.081 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # local i 00:08:08.081 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # [[ -z '' ]] 00:08:08.081 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # bdev_timeout=2000 00:08:08.081 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:08.339 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 563bb34b-9df5-4fcc-9b91-80eaad6737d6 -t 2000 00:08:08.598 [ 00:08:08.598 { 
00:08:08.598 "name": "563bb34b-9df5-4fcc-9b91-80eaad6737d6", 00:08:08.598 "aliases": [ 00:08:08.598 "lvs/lvol" 00:08:08.598 ], 00:08:08.598 "product_name": "Logical Volume", 00:08:08.598 "block_size": 4096, 00:08:08.598 "num_blocks": 38912, 00:08:08.598 "uuid": "563bb34b-9df5-4fcc-9b91-80eaad6737d6", 00:08:08.598 "assigned_rate_limits": { 00:08:08.598 "rw_ios_per_sec": 0, 00:08:08.598 "rw_mbytes_per_sec": 0, 00:08:08.598 "r_mbytes_per_sec": 0, 00:08:08.598 "w_mbytes_per_sec": 0 00:08:08.598 }, 00:08:08.598 "claimed": false, 00:08:08.598 "zoned": false, 00:08:08.598 "supported_io_types": { 00:08:08.598 "read": true, 00:08:08.598 "write": true, 00:08:08.598 "unmap": true, 00:08:08.598 "flush": false, 00:08:08.598 "reset": true, 00:08:08.598 "nvme_admin": false, 00:08:08.598 "nvme_io": false, 00:08:08.598 "nvme_io_md": false, 00:08:08.598 "write_zeroes": true, 00:08:08.598 "zcopy": false, 00:08:08.598 "get_zone_info": false, 00:08:08.598 "zone_management": false, 00:08:08.598 "zone_append": false, 00:08:08.598 "compare": false, 00:08:08.598 "compare_and_write": false, 00:08:08.598 "abort": false, 00:08:08.598 "seek_hole": true, 00:08:08.598 "seek_data": true, 00:08:08.598 "copy": false, 00:08:08.598 "nvme_iov_md": false 00:08:08.598 }, 00:08:08.598 "driver_specific": { 00:08:08.598 "lvol": { 00:08:08.598 "lvol_store_uuid": "6e9edf8b-fe26-4538-b8fe-a9553af304be", 00:08:08.598 "base_bdev": "aio_bdev", 00:08:08.598 "thin_provision": false, 00:08:08.598 "num_allocated_clusters": 38, 00:08:08.598 "snapshot": false, 00:08:08.598 "clone": false, 00:08:08.598 "esnap_clone": false 00:08:08.598 } 00:08:08.598 } 00:08:08.598 } 00:08:08.598 ] 00:08:08.598 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@914 -- # return 0 00:08:08.598 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:08:08.598 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:08.857 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:08.857 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:08:08.857 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:09.116 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:09.116 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 563bb34b-9df5-4fcc-9b91-80eaad6737d6 00:08:09.374 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6e9edf8b-fe26-4538-b8fe-a9553af304be 00:08:09.632 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:09.890 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:10.458 ************************************ 00:08:10.458 END TEST lvs_grow_dirty 00:08:10.458 ************************************ 00:08:10.458 00:08:10.458 real 0m20.880s 00:08:10.458 user 0m43.985s 00:08:10.458 sys 0m8.525s 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # type=--id 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # id=0 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # '[' --id = --pid ']' 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # shm_files=nvmf_trace.0 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # [[ -z nvmf_trace.0 ]] 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # for n in $shm_files 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:10.458 nvmf_trace.0 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # return 0 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.458 08:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:10.716 rmmod nvme_tcp 00:08:10.716 rmmod nvme_fabrics 00:08:10.716 rmmod nvme_keyring 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63557 ']' 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63557 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' -z 63557 ']' 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@961 -- # kill -0 63557 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # uname 00:08:10.716 08:20:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 63557 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:10.716 killing process with pid 63557 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@975 -- # echo 'killing process with pid 63557' 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # kill 63557 00:08:10.716 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@981 -- # wait 63557 00:08:10.975 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:10.975 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:10.975 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:10.975 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:10.975 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.976 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.234 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:11.234 00:08:11.234 real 0m41.892s 00:08:11.234 user 1m7.355s 00:08:11.234 sys 0m12.000s 00:08:11.234 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:11.234 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.234 ************************************ 00:08:11.234 END TEST nvmf_lvs_grow 00:08:11.234 ************************************ 00:08:11.234 08:20:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:11.234 08:20:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:08:11.234 08:20:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:11.234 08:20:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.234 ************************************ 00:08:11.234 START TEST nvmf_bdev_io_wait 00:08:11.234 ************************************ 00:08:11.234 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:11.234 * Looking for test storage... 
00:08:11.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.234 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:08:11.234 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1638 -- # lcov --version 00:08:11.234 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:08:11.493 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:08:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.494 --rc genhtml_branch_coverage=1 00:08:11.494 --rc genhtml_function_coverage=1 00:08:11.494 --rc genhtml_legend=1 00:08:11.494 --rc geninfo_all_blocks=1 00:08:11.494 --rc geninfo_unexecuted_blocks=1 00:08:11.494 00:08:11.494 ' 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:08:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.494 --rc genhtml_branch_coverage=1 00:08:11.494 --rc genhtml_function_coverage=1 00:08:11.494 --rc genhtml_legend=1 00:08:11.494 --rc geninfo_all_blocks=1 00:08:11.494 --rc geninfo_unexecuted_blocks=1 00:08:11.494 00:08:11.494 ' 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:08:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.494 --rc genhtml_branch_coverage=1 00:08:11.494 --rc genhtml_function_coverage=1 00:08:11.494 --rc genhtml_legend=1 00:08:11.494 --rc geninfo_all_blocks=1 00:08:11.494 --rc geninfo_unexecuted_blocks=1 00:08:11.494 00:08:11.494 ' 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:08:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.494 --rc genhtml_branch_coverage=1 00:08:11.494 --rc genhtml_function_coverage=1 00:08:11.494 --rc genhtml_legend=1 00:08:11.494 --rc geninfo_all_blocks=1 00:08:11.494 --rc geninfo_unexecuted_blocks=1 00:08:11.494 00:08:11.494 ' 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.494 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.494 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:11.495 08:20:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:11.495 Cannot find device "nvmf_init_br" 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:11.495 Cannot find device "nvmf_init_br2" 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:11.495 Cannot find device "nvmf_tgt_br" 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.495 Cannot find 
device "nvmf_tgt_br2" 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:11.495 Cannot find device "nvmf_init_br" 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:11.495 Cannot find device "nvmf_init_br2" 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:11.495 Cannot find device "nvmf_tgt_br" 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:11.495 Cannot find device "nvmf_tgt_br2" 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:11.495 Cannot find device "nvmf_br" 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:11.495 Cannot find device "nvmf_init_if" 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:11.495 Cannot find device "nvmf_init_if2" 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:11.495 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.495 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.495 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.495 
08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.495 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.495 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:11.755 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.755 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:08:11.755 00:08:11.755 --- 10.0.0.3 ping statistics --- 00:08:11.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.755 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:11.755 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:11.755 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:08:11.755 00:08:11.755 --- 10.0.0.4 ping statistics --- 00:08:11.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.755 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:11.755 00:08:11.755 --- 10.0.0.1 ping statistics --- 00:08:11.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.755 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:11.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:11.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:11.755 00:08:11.755 --- 10.0.0.2 ping statistics --- 00:08:11.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.755 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63938 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63938 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # '[' -z 63938 ']' 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:11.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:11.755 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.755 [2024-11-20 08:20:59.289592] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
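The entries above show how the harness brings the target up: nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc, its pid is recorded in nvmfpid, and waitforlisten then blocks until the application answers on /var/tmp/spdk.sock. A minimal sketch of that pattern, assuming the repo layout shown in the trace (the rpc_get_methods probe and the poll interval are illustrative choices, not taken from this log):

    # Sketch only: start the target in the test namespace and poll its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # Wait until the RPC server responds; rpc_get_methods is used here as a cheap liveness probe.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done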
00:08:11.755 [2024-11-20 08:20:59.289662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.015 [2024-11-20 08:20:59.436304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.015 [2024-11-20 08:20:59.490562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.015 [2024-11-20 08:20:59.490631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.015 [2024-11-20 08:20:59.490642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.015 [2024-11-20 08:20:59.490651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.015 [2024-11-20 08:20:59.490658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.015 [2024-11-20 08:20:59.491792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.015 [2024-11-20 08:20:59.491873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.016 [2024-11-20 08:20:59.491945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.016 [2024-11-20 08:20:59.491946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.016 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:12.016 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@871 -- # return 0 00:08:12.016 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:12.016 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@735 -- # xtrace_disable 00:08:12.016 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.276 [2024-11-20 08:20:59.668446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@566 -- # xtrace_disable 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.276 [2024-11-20 08:20:59.680795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.276 Malloc0 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.276 [2024-11-20 08:20:59.736104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63960 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63962 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:12.276 08:20:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:12.276 { 00:08:12.276 "params": { 00:08:12.276 "name": "Nvme$subsystem", 00:08:12.276 "trtype": "$TEST_TRANSPORT", 00:08:12.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.276 "adrfam": "ipv4", 00:08:12.276 "trsvcid": "$NVMF_PORT", 00:08:12.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.276 "hdgst": ${hdgst:-false}, 00:08:12.276 "ddgst": ${ddgst:-false} 00:08:12.276 }, 00:08:12.276 "method": "bdev_nvme_attach_controller" 00:08:12.276 } 00:08:12.276 EOF 00:08:12.276 )") 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63964 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:12.276 { 00:08:12.276 "params": { 00:08:12.276 "name": "Nvme$subsystem", 00:08:12.276 "trtype": "$TEST_TRANSPORT", 00:08:12.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.276 "adrfam": "ipv4", 00:08:12.276 "trsvcid": "$NVMF_PORT", 00:08:12.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.276 "hdgst": ${hdgst:-false}, 00:08:12.276 "ddgst": ${ddgst:-false} 00:08:12.276 }, 00:08:12.276 "method": "bdev_nvme_attach_controller" 00:08:12.276 } 00:08:12.276 EOF 00:08:12.276 )") 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63967 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:12.276 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:08:12.276 { 00:08:12.276 "params": { 00:08:12.276 "name": "Nvme$subsystem", 00:08:12.276 "trtype": "$TEST_TRANSPORT", 00:08:12.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.276 "adrfam": "ipv4", 00:08:12.276 "trsvcid": "$NVMF_PORT", 00:08:12.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.276 "hdgst": ${hdgst:-false}, 00:08:12.276 "ddgst": ${ddgst:-false} 00:08:12.276 }, 00:08:12.276 "method": "bdev_nvme_attach_controller" 00:08:12.277 } 00:08:12.277 EOF 00:08:12.277 )") 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:12.277 "params": { 00:08:12.277 "name": "Nvme1", 00:08:12.277 "trtype": "tcp", 00:08:12.277 "traddr": "10.0.0.3", 00:08:12.277 "adrfam": "ipv4", 00:08:12.277 "trsvcid": "4420", 00:08:12.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.277 "hdgst": false, 00:08:12.277 "ddgst": false 00:08:12.277 }, 00:08:12.277 "method": "bdev_nvme_attach_controller" 00:08:12.277 }' 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:12.277 "params": { 00:08:12.277 "name": "Nvme1", 00:08:12.277 "trtype": "tcp", 00:08:12.277 "traddr": "10.0.0.3", 00:08:12.277 "adrfam": "ipv4", 00:08:12.277 "trsvcid": "4420", 00:08:12.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.277 "hdgst": false, 00:08:12.277 "ddgst": false 00:08:12.277 }, 00:08:12.277 "method": "bdev_nvme_attach_controller" 00:08:12.277 }' 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:12.277 { 00:08:12.277 "params": { 00:08:12.277 "name": "Nvme$subsystem", 00:08:12.277 "trtype": "$TEST_TRANSPORT", 00:08:12.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.277 "adrfam": "ipv4", 00:08:12.277 "trsvcid": "$NVMF_PORT", 00:08:12.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.277 "hdgst": ${hdgst:-false}, 00:08:12.277 "ddgst": ${ddgst:-false} 00:08:12.277 }, 00:08:12.277 "method": "bdev_nvme_attach_controller" 00:08:12.277 } 00:08:12.277 EOF 00:08:12.277 )") 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:12.277 "params": { 00:08:12.277 "name": "Nvme1", 00:08:12.277 "trtype": "tcp", 00:08:12.277 "traddr": "10.0.0.3", 00:08:12.277 "adrfam": "ipv4", 00:08:12.277 "trsvcid": "4420", 00:08:12.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.277 "hdgst": false, 00:08:12.277 "ddgst": false 00:08:12.277 }, 00:08:12.277 "method": "bdev_nvme_attach_controller" 00:08:12.277 }' 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:12.277 "params": { 00:08:12.277 "name": "Nvme1", 00:08:12.277 "trtype": "tcp", 00:08:12.277 "traddr": "10.0.0.3", 00:08:12.277 "adrfam": "ipv4", 00:08:12.277 "trsvcid": "4420", 00:08:12.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.277 "hdgst": false, 00:08:12.277 "ddgst": false 00:08:12.277 }, 00:08:12.277 "method": "bdev_nvme_attach_controller" 00:08:12.277 }' 00:08:12.277 [2024-11-20 08:20:59.804771] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:12.277 [2024-11-20 08:20:59.804879] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:12.277 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63960 00:08:12.277 [2024-11-20 08:20:59.819388] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
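The /dev/fd/63 argument on each bdevperf command line above is the read end of a bash process substitution: every instance is handed the JSON that gen_nvmf_target_json (from nvmf/common.sh) prints, which tells bdevperf to attach an NVMe-oF controller over TCP to 10.0.0.3:4420 before its workload starts. A sketch of the invocation pattern, with the flags copied from the write instance traced above:

    # Sketch: feed the generated attach config to bdevperf via process substitution.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)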
00:08:12.277 [2024-11-20 08:20:59.819465] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:12.277 [2024-11-20 08:20:59.820088] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:12.277 [2024-11-20 08:20:59.820162] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:12.558 [2024-11-20 08:20:59.835422] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:12.558 [2024-11-20 08:20:59.835525] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:12.558 [2024-11-20 08:21:00.027100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.558 [2024-11-20 08:21:00.080027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:12.558 [2024-11-20 08:21:00.093986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.558 [2024-11-20 08:21:00.098697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.817 [2024-11-20 08:21:00.151501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:12.817 [2024-11-20 08:21:00.162015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.817 [2024-11-20 08:21:00.165464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.817 [2024-11-20 08:21:00.210896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:12.817 [2024-11-20 08:21:00.223550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.817 Running I/O for 1 seconds... 00:08:12.817 [2024-11-20 08:21:00.251354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.817 Running I/O for 1 seconds... 00:08:12.817 [2024-11-20 08:21:00.308408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:12.817 [2024-11-20 08:21:00.322201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.817 Running I/O for 1 seconds... 00:08:13.076 Running I/O for 1 seconds... 
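Four bdevperf instances run concurrently, one per workload, each pinned to its own core by the -m mask (write on 0x10, read on 0x20, flush on 0x40, unmap on 0x80); the script records their pids and waits on each in turn, so the result tables that follow are one-second runs executed in parallel against the same Malloc0-backed namespace. Reduced to its shape (a sketch; bdevperf stands for build/examples/bdevperf and the flags are the ones traced above):

    # Sketch of the concurrent-workload pattern used by bdev_io_wait.sh.
    bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(gen_nvmf_target_json) & WRITE_PID=$!
    bdevperf -m 0x20 -i 2 -q 128 -o 4096 -w read  -t 1 -s 256 --json <(gen_nvmf_target_json) & READ_PID=$!
    bdevperf -m 0x40 -i 3 -q 128 -o 4096 -w flush -t 1 -s 256 --json <(gen_nvmf_target_json) & FLUSH_PID=$!
    bdevperf -m 0x80 -i 4 -q 128 -o 4096 -w unmap -t 1 -s 256 --json <(gen_nvmf_target_json) & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"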
00:08:14.011 171560.00 IOPS, 670.16 MiB/s
00:08:14.011 Latency(us)
00:08:14.011 [2024-11-20T08:21:01.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:14.011 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:14.011 Nvme1n1 : 1.00 171225.38 668.85 0.00 0.00 743.71 370.50 1966.08
00:08:14.011 [2024-11-20T08:21:01.572Z] ===================================================================================================================
00:08:14.011 [2024-11-20T08:21:01.572Z] Total : 171225.38 668.85 0.00 0.00 743.71 370.50 1966.08
00:08:14.011 9937.00 IOPS, 38.82 MiB/s
00:08:14.011 Latency(us)
00:08:14.011 [2024-11-20T08:21:01.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:14.011 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:14.011 Nvme1n1 : 1.01 9976.77 38.97 0.00 0.00 12769.02 7685.59 19422.49
00:08:14.011 [2024-11-20T08:21:01.572Z] ===================================================================================================================
00:08:14.011 [2024-11-20T08:21:01.572Z] Total : 9976.77 38.97 0.00 0.00 12769.02 7685.59 19422.49
00:08:14.011 7968.00 IOPS, 31.12 MiB/s
00:08:14.011 Latency(us)
00:08:14.011 [2024-11-20T08:21:01.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:14.011 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:14.011 Nvme1n1 : 1.01 8028.75 31.36 0.00 0.00 15860.98 6583.39 26095.24
00:08:14.011 [2024-11-20T08:21:01.572Z] ===================================================================================================================
00:08:14.011 [2024-11-20T08:21:01.572Z] Total : 8028.75 31.36 0.00 0.00 15860.98 6583.39 26095.24
00:08:14.011 8965.00 IOPS, 35.02 MiB/s
00:08:14.011 Latency(us)
00:08:14.011 [2024-11-20T08:21:01.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:14.011 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:14.011 Nvme1n1 : 1.01 9040.95 35.32 0.00 0.00 14098.20 6583.39 22758.87
00:08:14.011 [2024-11-20T08:21:01.572Z] ===================================================================================================================
00:08:14.011 [2024-11-20T08:21:01.572Z] Total : 9040.95 35.32 0.00 0.00 14098.20 6583.39 22758.87
00:08:14.011 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63962
08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63964
08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63967
00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@566 -- # xtrace_disable
00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]]
00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- #
nvmfcleanup 00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:14.269 rmmod nvme_tcp 00:08:14.269 rmmod nvme_fabrics 00:08:14.269 rmmod nvme_keyring 00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63938 ']' 00:08:14.269 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63938 00:08:14.270 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' -z 63938 ']' 00:08:14.270 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@961 -- # kill -0 63938 00:08:14.270 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # uname 00:08:14.270 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:14.270 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 63938 00:08:14.270 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:14.270 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:14.270 killing process with pid 63938 00:08:14.270 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@975 -- # echo 'killing process with pid 63938' 00:08:14.270 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # kill 63938 00:08:14.270 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@981 -- # wait 63938 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:14.579 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.579 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:14.579 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:14.579 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:14.579 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:14.579 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:14.579 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:14.579 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:14.579 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:14.838 00:08:14.838 real 0m3.613s 00:08:14.838 user 0m14.128s 00:08:14.838 sys 0m2.290s 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.838 ************************************ 00:08:14.838 END TEST nvmf_bdev_io_wait 00:08:14.838 ************************************ 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.838 ************************************ 00:08:14.838 START TEST nvmf_queue_depth 00:08:14.838 ************************************ 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:14.838 * Looking for test storage... 
00:08:14.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1638 -- # lcov --version 00:08:14.838 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:15.098 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:08:15.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.099 --rc genhtml_branch_coverage=1 00:08:15.099 --rc genhtml_function_coverage=1 00:08:15.099 --rc genhtml_legend=1 00:08:15.099 --rc geninfo_all_blocks=1 00:08:15.099 --rc geninfo_unexecuted_blocks=1 00:08:15.099 00:08:15.099 ' 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:08:15.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.099 --rc genhtml_branch_coverage=1 00:08:15.099 --rc genhtml_function_coverage=1 00:08:15.099 --rc genhtml_legend=1 00:08:15.099 --rc geninfo_all_blocks=1 00:08:15.099 --rc geninfo_unexecuted_blocks=1 00:08:15.099 00:08:15.099 ' 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:08:15.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.099 --rc genhtml_branch_coverage=1 00:08:15.099 --rc genhtml_function_coverage=1 00:08:15.099 --rc genhtml_legend=1 00:08:15.099 --rc geninfo_all_blocks=1 00:08:15.099 --rc geninfo_unexecuted_blocks=1 00:08:15.099 00:08:15.099 ' 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:08:15.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.099 --rc genhtml_branch_coverage=1 00:08:15.099 --rc genhtml_function_coverage=1 00:08:15.099 --rc genhtml_legend=1 00:08:15.099 --rc geninfo_all_blocks=1 00:08:15.099 --rc geninfo_unexecuted_blocks=1 00:08:15.099 00:08:15.099 ' 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.099 08:21:02 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.099 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:15.099 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:15.100 Cannot find device "nvmf_init_br" 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:15.100 Cannot find device "nvmf_init_br2" 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:15.100 Cannot find device "nvmf_tgt_br" 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:15.100 Cannot find device "nvmf_tgt_br2" 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:15.100 Cannot find device "nvmf_init_br" 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:15.100 Cannot find device "nvmf_init_br2" 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:15.100 Cannot find device "nvmf_tgt_br" 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:15.100 Cannot find device "nvmf_tgt_br2" 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:15.100 Cannot find device "nvmf_br" 00:08:15.100 08:21:02 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:15.100 Cannot find device "nvmf_init_if" 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:15.100 Cannot find device "nvmf_init_if2" 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:15.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:15.100 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:15.359 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:15.359 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:15.359 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:08:15.359 00:08:15.359 --- 10.0.0.3 ping statistics --- 00:08:15.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.360 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:15.360 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:15.360 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:08:15.360 00:08:15.360 --- 10.0.0.4 ping statistics --- 00:08:15.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.360 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:15.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:15.360 00:08:15.360 --- 10.0.0.1 ping statistics --- 00:08:15.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.360 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:15.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:08:15.360 00:08:15.360 --- 10.0.0.2 ping statistics --- 00:08:15.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.360 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64233 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64233 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # '[' -z 64233 ']' 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:15.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
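For reference, the nvmf_veth_init sequence traced above (nvmf/common.sh@177-225) builds a small virtual test network: two initiator-side veth pairs stay in the root namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target-side pairs are moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), and the four peer ends are enslaved to the nvmf_br bridge so initiator and target addresses can reach each other. Condensed to a single-path sketch of the commands shown above (not the script itself, and omitting the second pair):

    # one initiator/target pair of the four-interface topology traced above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                      # bridge the *_br peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # open TCP/4420 on the initiator interface; the SPDK_NVMF comment lets cleanup find the rule later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                   # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The tagged iptables comment is what the later iptr cleanup keys on (iptables-save | grep -v SPDK_NVMF | iptables-restore), so only the rules added by the test are removed.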
00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:15.360 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.619 [2024-11-20 08:21:02.937319] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:15.619 [2024-11-20 08:21:02.937392] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.619 [2024-11-20 08:21:03.086975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.619 [2024-11-20 08:21:03.140298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.619 [2024-11-20 08:21:03.140377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.619 [2024-11-20 08:21:03.140388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.619 [2024-11-20 08:21:03.140396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.619 [2024-11-20 08:21:03.140403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.619 [2024-11-20 08:21:03.140831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.879 [2024-11-20 08:21:03.198320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@871 -- # return 0 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@735 -- # xtrace_disable 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.879 [2024-11-20 08:21:03.315066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set 
+x 00:08:15.879 Malloc0 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.879 [2024-11-20 08:21:03.367788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64258 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64258 /var/tmp/bdevperf.sock 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # '[' -z 64258 ']' 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:15.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:15.879 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.879 [2024-11-20 08:21:03.432497] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
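Condensing the RPC traffic above: the target (pid 64233, running inside the namespace and answering on /var/tmp/spdk.sock) is given a TCP transport, a malloc bdev, one subsystem and one listener, while bdevperf is started idle (-z) as a second SPDK app with its own RPC socket. A rough sketch of the steps driven by queue_depth.sh, using the exact flags from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side (defaults to /var/tmp/spdk.sock)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # initiator side: queue depth 1024, 4 KiB I/O, 10 s verify workload
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &
    # (the script waits for /var/tmp/bdevperf.sock to come up before attaching)
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The per-second IOPS ticks and the summary table/JSON that follow are bdevperf's own output for that run.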
00:08:15.879 [2024-11-20 08:21:03.432594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64258 ] 00:08:16.141 [2024-11-20 08:21:03.586845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.141 [2024-11-20 08:21:03.649197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.400 [2024-11-20 08:21:03.707970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.400 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:16.400 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@871 -- # return 0 00:08:16.400 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:16.400 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:16.400 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.400 NVMe0n1 00:08:16.400 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:16.400 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:16.659 Running I/O for 10 seconds... 00:08:18.529 7150.00 IOPS, 27.93 MiB/s [2024-11-20T08:21:07.025Z] 7423.50 IOPS, 29.00 MiB/s [2024-11-20T08:21:08.403Z] 7592.33 IOPS, 29.66 MiB/s [2024-11-20T08:21:09.339Z] 7688.50 IOPS, 30.03 MiB/s [2024-11-20T08:21:10.274Z] 7649.80 IOPS, 29.88 MiB/s [2024-11-20T08:21:11.281Z] 7684.17 IOPS, 30.02 MiB/s [2024-11-20T08:21:12.218Z] 7685.86 IOPS, 30.02 MiB/s [2024-11-20T08:21:13.153Z] 7695.50 IOPS, 30.06 MiB/s [2024-11-20T08:21:14.087Z] 7743.44 IOPS, 30.25 MiB/s [2024-11-20T08:21:14.087Z] 7735.80 IOPS, 30.22 MiB/s 00:08:26.526 Latency(us) 00:08:26.526 [2024-11-20T08:21:14.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.526 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:26.526 Verification LBA range: start 0x0 length 0x4000 00:08:26.526 NVMe0n1 : 10.08 7770.44 30.35 0.00 0.00 131062.53 21924.77 91035.46 00:08:26.526 [2024-11-20T08:21:14.087Z] =================================================================================================================== 00:08:26.526 [2024-11-20T08:21:14.087Z] Total : 7770.44 30.35 0.00 0.00 131062.53 21924.77 91035.46 00:08:26.526 { 00:08:26.526 "results": [ 00:08:26.526 { 00:08:26.526 "job": "NVMe0n1", 00:08:26.526 "core_mask": "0x1", 00:08:26.526 "workload": "verify", 00:08:26.526 "status": "finished", 00:08:26.526 "verify_range": { 00:08:26.526 "start": 0, 00:08:26.526 "length": 16384 00:08:26.526 }, 00:08:26.526 "queue_depth": 1024, 00:08:26.526 "io_size": 4096, 00:08:26.526 "runtime": 10.083857, 00:08:26.526 "iops": 7770.439426104515, 00:08:26.526 "mibps": 30.353279008220763, 00:08:26.526 "io_failed": 0, 00:08:26.526 "io_timeout": 0, 00:08:26.526 "avg_latency_us": 131062.52980940139, 00:08:26.526 "min_latency_us": 21924.77090909091, 00:08:26.526 "max_latency_us": 91035.46181818182 00:08:26.526 
} 00:08:26.526 ], 00:08:26.526 "core_count": 1 00:08:26.526 } 00:08:26.526 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64258 00:08:26.526 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' -z 64258 ']' 00:08:26.526 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@961 -- # kill -0 64258 00:08:26.785 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # uname 00:08:26.785 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:26.785 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 64258 00:08:26.785 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:26.785 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:26.785 killing process with pid 64258 00:08:26.785 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@975 -- # echo 'killing process with pid 64258' 00:08:26.785 Received shutdown signal, test time was about 10.000000 seconds 00:08:26.785 00:08:26.785 Latency(us) 00:08:26.785 [2024-11-20T08:21:14.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.785 [2024-11-20T08:21:14.346Z] =================================================================================================================== 00:08:26.785 [2024-11-20T08:21:14.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:26.785 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # kill 64258 00:08:26.785 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@981 -- # wait 64258 00:08:26.785 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:26.785 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:26.785 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:26.785 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.042 rmmod nvme_tcp 00:08:27.042 rmmod nvme_fabrics 00:08:27.042 rmmod nvme_keyring 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64233 ']' 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64233 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' -z 64233 ']' 00:08:27.042 
08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@961 -- # kill -0 64233 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # uname 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 64233 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:08:27.042 killing process with pid 64233 00:08:27.042 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:08:27.043 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@975 -- # echo 'killing process with pid 64233' 00:08:27.043 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # kill 64233 00:08:27.043 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@981 -- # wait 64233 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:27.300 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:27.300 08:21:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:27.559 00:08:27.559 real 0m12.676s 00:08:27.559 user 0m21.482s 00:08:27.559 sys 0m2.240s 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.559 ************************************ 00:08:27.559 END TEST nvmf_queue_depth 00:08:27.559 ************************************ 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:27.559 ************************************ 00:08:27.559 START TEST nvmf_target_multipath 00:08:27.559 ************************************ 00:08:27.559 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:27.559 * Looking for test storage... 
00:08:27.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.559 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:08:27.559 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1638 -- # lcov --version 00:08:27.559 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:08:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.818 --rc genhtml_branch_coverage=1 00:08:27.818 --rc genhtml_function_coverage=1 00:08:27.818 --rc genhtml_legend=1 00:08:27.818 --rc geninfo_all_blocks=1 00:08:27.818 --rc geninfo_unexecuted_blocks=1 00:08:27.818 00:08:27.818 ' 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:08:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.818 --rc genhtml_branch_coverage=1 00:08:27.818 --rc genhtml_function_coverage=1 00:08:27.818 --rc genhtml_legend=1 00:08:27.818 --rc geninfo_all_blocks=1 00:08:27.818 --rc geninfo_unexecuted_blocks=1 00:08:27.818 00:08:27.818 ' 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:08:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.818 --rc genhtml_branch_coverage=1 00:08:27.818 --rc genhtml_function_coverage=1 00:08:27.818 --rc genhtml_legend=1 00:08:27.818 --rc geninfo_all_blocks=1 00:08:27.818 --rc geninfo_unexecuted_blocks=1 00:08:27.818 00:08:27.818 ' 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:08:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.818 --rc genhtml_branch_coverage=1 00:08:27.818 --rc genhtml_function_coverage=1 00:08:27.818 --rc genhtml_legend=1 00:08:27.818 --rc geninfo_all_blocks=1 00:08:27.818 --rc geninfo_unexecuted_blocks=1 00:08:27.818 00:08:27.818 ' 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.818 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.819 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.819 08:21:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:27.819 Cannot find device "nvmf_init_br" 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:27.819 Cannot find device "nvmf_init_br2" 00:08:27.819 08:21:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:27.819 Cannot find device "nvmf_tgt_br" 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.819 Cannot find device "nvmf_tgt_br2" 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:27.819 Cannot find device "nvmf_init_br" 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:27.819 Cannot find device "nvmf_init_br2" 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:27.819 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:27.819 Cannot find device "nvmf_tgt_br" 00:08:27.820 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:27.820 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:27.820 Cannot find device "nvmf_tgt_br2" 00:08:27.820 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:27.820 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:27.820 Cannot find device "nvmf_br" 00:08:27.820 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:27.820 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:27.820 Cannot find device "nvmf_init_if" 00:08:27.820 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:08:27.820 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:28.078 Cannot find device "nvmf_init_if2" 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:28.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:28.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
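The burst of "Cannot find device" / "Cannot open network namespace" messages at the start of this second nvmf_veth_init run is expected: before rebuilding the topology, common.sh tears down whatever the previous test left behind, and each failing delete is immediately paired with a "true" entry in the trace, which suggests the commands are guarded so the failures do not abort the run. Roughly equivalent to (a sketch of the pattern, not the script's exact wording):

    # delete-if-present; errors are harmless on an already-clean host
    ip link delete nvmf_init_if        || true   # -> "Cannot find device" after the earlier cleanup
    ip link delete nvmf_init_if2       || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    ip netns add nvmf_tgt_ns_spdk                # then rebuild the topology from scratch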
00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:28.079 08:21:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:28.079 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:28.079 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:28.079 00:08:28.079 --- 10.0.0.3 ping statistics --- 00:08:28.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.079 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:28.079 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:28.079 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:08:28.079 00:08:28.079 --- 10.0.0.4 ping statistics --- 00:08:28.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.079 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:28.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:28.079 00:08:28.079 --- 10.0.0.1 ping statistics --- 00:08:28.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.079 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:28.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:28.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:08:28.079 00:08:28.079 --- 10.0.0.2 ping statistics --- 00:08:28.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.079 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:28.079 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64637 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64637 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # '[' -z 64637 ']' 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:28.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
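nvmfappstart here launches nvmf_tgt inside the namespace with a four-core mask (-m 0xF, versus -m 0x2 for the queue-depth run) and then blocks until the app answers on /var/tmp/spdk.sock; only the tail of that wait (the "(( i == 0 ))" / "return 0" lines) is visible in the trace. A minimal sketch of that kind of readiness poll, assuming rpc.py's rpc_get_methods is used as the probe (the real loop lives in autotest_common.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 100; i > 0; i--)); do
        # any successful RPC means the target is up and listening
        if "$rpc" -s "$sock" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done
    (( i > 0 )) || { echo "nvmf_tgt (pid $nvmfpid) did not come up"; exit 1; }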
00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:28.338 08:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:28.338 [2024-11-20 08:21:15.714591] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:28.338 [2024-11-20 08:21:15.715280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.338 [2024-11-20 08:21:15.870099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.597 [2024-11-20 08:21:15.941594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.597 [2024-11-20 08:21:15.941661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.597 [2024-11-20 08:21:15.941675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.597 [2024-11-20 08:21:15.941686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.597 [2024-11-20 08:21:15.941695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.597 [2024-11-20 08:21:15.942924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.597 [2024-11-20 08:21:15.942999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.597 [2024-11-20 08:21:15.943064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.597 [2024-11-20 08:21:15.943064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.597 [2024-11-20 08:21:16.000411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.533 08:21:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:29.533 08:21:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@871 -- # return 0 00:08:29.533 08:21:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:29.533 08:21:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@735 -- # xtrace_disable 00:08:29.533 08:21:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:29.533 08:21:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.533 08:21:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:29.533 [2024-11-20 08:21:17.047838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.533 08:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:30.099 Malloc0 00:08:30.099 08:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:30.358 08:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.616 08:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:30.875 [2024-11-20 08:21:18.196294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:30.875 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:31.134 [2024-11-20 08:21:18.468692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:31.134 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:31.134 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:31.393 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:31.393 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # local i=0 00:08:31.393 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # local nvme_device_counter=1 nvme_devices=0 00:08:31.393 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # [[ -n '' ]] 00:08:31.393 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # sleep 2 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1213 -- # (( i++ <= 15 )) 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1214 -- # lsblk -l -o NAME,SERIAL 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1214 -- # grep -c SPDKISFASTANDAWESOME 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1214 -- # nvme_devices=1 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1215 -- # (( nvme_devices == nvme_device_counter )) 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1215 -- # return 0 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 
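Condensed, the path setup traced so far configures one subsystem with two TCP listeners and connects to it over both addresses; a sketch built only from commands that appear verbatim in this run (rpc.py path, NQN, addresses and serial are this run's values, and $NVME_HOSTNQN/$NVME_HOSTID stand in for the UUID that nvmf/common.sh derives from nvme gen-hostnqn):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # transport and backing bdev, options exactly as passed in this run
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # subsystem with the serial that waitforserial later greps for
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # two listeners on the same subsystem give the host two paths
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
  # connect once per path (flags copied from this run); the kernel initiator merges them
  # into one multipath device, seen below as nvme0c0n1/nvme0c1n1 under nvme-subsys0
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G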
00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64732 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:33.296 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:33.296 [global] 00:08:33.296 thread=1 00:08:33.296 invalidate=1 00:08:33.296 rw=randrw 00:08:33.296 time_based=1 00:08:33.296 runtime=6 00:08:33.296 ioengine=libaio 00:08:33.296 direct=1 00:08:33.296 bs=4096 00:08:33.296 iodepth=128 00:08:33.296 norandommap=0 00:08:33.296 numjobs=1 00:08:33.296 00:08:33.296 verify_dump=1 00:08:33.296 verify_backlog=512 00:08:33.296 verify_state_save=0 00:08:33.296 do_verify=1 00:08:33.296 verify=crc32c-intel 00:08:33.296 [job0] 00:08:33.296 filename=/dev/nvme0n1 00:08:33.296 Could not set queue depth (nvme0n1) 00:08:33.555 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:33.555 fio-3.35 00:08:33.555 Starting 1 thread 00:08:34.491 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:34.750 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:35.009 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:35.268 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:35.527 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:35.527 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:35.527 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.528 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:35.528 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:35.528 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:35.528 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:35.528 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:35.528 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.528 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:35.528 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:35.528 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:35.528 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64732 00:08:39.719 00:08:39.719 job0: (groupid=0, jobs=1): err= 0: pid=64753: Wed Nov 20 08:21:27 2024 00:08:39.719 read: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(235MiB/6007msec) 00:08:39.719 slat (usec): min=7, max=7393, avg=58.70, stdev=221.69 00:08:39.719 clat (usec): min=1051, max=16554, avg=8749.96, stdev=1490.18 00:08:39.719 lat (usec): min=1066, max=16589, avg=8808.66, stdev=1494.20 00:08:39.719 clat percentiles (usec): 00:08:39.719 | 1.00th=[ 4621], 5.00th=[ 6849], 10.00th=[ 7504], 20.00th=[ 7963], 00:08:39.719 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:08:39.719 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[12387], 00:08:39.719 | 99.00th=[13566], 99.50th=[13960], 99.90th=[14484], 99.95th=[14746], 00:08:39.719 | 99.99th=[15401] 00:08:39.719 bw ( KiB/s): min= 8440, max=26352, per=51.47%, avg=20628.00, stdev=5806.04, samples=12 00:08:39.719 iops : min= 2110, max= 6588, avg=5157.00, stdev=1451.51, samples=12 00:08:39.719 write: IOPS=5945, BW=23.2MiB/s (24.4MB/s)(121MiB/5220msec); 0 zone resets 00:08:39.719 slat (usec): min=15, max=2113, avg=68.14, stdev=159.41 00:08:39.719 clat (usec): min=2735, max=15461, avg=7578.17, stdev=1345.97 00:08:39.719 lat (usec): min=2763, max=15507, avg=7646.31, stdev=1350.72 00:08:39.719 clat percentiles (usec): 00:08:39.719 | 1.00th=[ 3589], 5.00th=[ 4555], 10.00th=[ 6063], 20.00th=[ 6980], 00:08:39.719 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:08:39.719 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8717], 95.00th=[ 8979], 00:08:39.719 | 99.00th=[11731], 99.50th=[12387], 99.90th=[14353], 99.95th=[14746], 00:08:39.719 | 99.99th=[15008] 00:08:39.719 bw ( KiB/s): min= 8344, max=25808, per=86.84%, avg=20652.00, stdev=5673.98, samples=12 00:08:39.719 iops : min= 2086, max= 6452, avg=5163.00, stdev=1418.49, samples=12 00:08:39.719 lat (msec) : 2=0.02%, 4=1.04%, 10=91.89%, 20=7.05% 00:08:39.719 cpu : usr=5.59%, sys=24.06%, ctx=5323, majf=0, minf=108 00:08:39.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:39.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:39.719 issued rwts: total=60192,31036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:39.719 00:08:39.719 Run status group 0 (all jobs): 00:08:39.719 READ: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=235MiB (247MB), run=6007-6007msec 00:08:39.719 WRITE: bw=23.2MiB/s (24.4MB/s), 23.2MiB/s-23.2MiB/s (24.4MB/s-24.4MB/s), io=121MiB (127MB), run=5220-5220msec 00:08:39.719 00:08:39.719 Disk stats (read/write): 00:08:39.719 nvme0n1: ios=59574/30258, merge=0/0, ticks=498703/213749, in_queue=712452, util=98.63% 00:08:39.719 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:39.978 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:40.237 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:40.237 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:40.237 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:40.237 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:40.237 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:40.237 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:40.237 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:40.237 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:40.237 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:40.237 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:40.238 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:40.238 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:40.238 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:40.238 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:40.238 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64834 00:08:40.238 08:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:40.238 [global] 00:08:40.238 thread=1 00:08:40.238 invalidate=1 00:08:40.238 rw=randrw 00:08:40.238 time_based=1 00:08:40.238 runtime=6 00:08:40.238 ioengine=libaio 00:08:40.238 direct=1 00:08:40.238 bs=4096 00:08:40.238 iodepth=128 00:08:40.238 norandommap=0 00:08:40.238 numjobs=1 00:08:40.238 00:08:40.238 verify_dump=1 00:08:40.238 verify_backlog=512 00:08:40.238 verify_state_save=0 00:08:40.238 do_verify=1 00:08:40.238 verify=crc32c-intel 00:08:40.238 [job0] 00:08:40.238 filename=/dev/nvme0n1 00:08:40.238 Could not set queue depth (nvme0n1) 00:08:40.496 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:40.496 fio-3.35 00:08:40.496 Starting 1 thread 00:08:41.432 08:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:41.691 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:41.949 
08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:41.949 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:41.949 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:41.949 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:41.949 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:41.949 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:41.949 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:41.949 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:41.949 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:41.949 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:41.949 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:41.949 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:41.949 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:42.207 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:42.468 08:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64834 00:08:46.658 00:08:46.658 job0: (groupid=0, jobs=1): err= 0: pid=64855: Wed Nov 20 08:21:34 2024 00:08:46.658 read: IOPS=10.6k, BW=41.4MiB/s (43.4MB/s)(248MiB/6006msec) 00:08:46.658 slat (usec): min=5, max=11613, avg=46.95, stdev=215.49 00:08:46.658 clat (usec): min=304, max=34343, avg=8244.09, stdev=2643.46 00:08:46.658 lat (usec): min=319, max=34360, avg=8291.04, stdev=2666.69 00:08:46.658 clat percentiles (usec): 00:08:46.658 | 1.00th=[ 2966], 5.00th=[ 4359], 10.00th=[ 5080], 20.00th=[ 5997], 00:08:46.658 | 30.00th=[ 7046], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 8848], 00:08:46.658 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[12256], 00:08:46.658 | 99.00th=[15533], 99.50th=[19530], 99.90th=[31589], 99.95th=[32375], 00:08:46.658 | 99.99th=[33817] 00:08:46.658 bw ( KiB/s): min= 48, max=35744, per=52.76%, avg=22348.64, stdev=9490.12, samples=11 00:08:46.658 iops : min= 12, max= 8936, avg=5587.09, stdev=2372.53, samples=11 00:08:46.658 write: IOPS=6442, BW=25.2MiB/s (26.4MB/s)(133MiB/5265msec); 0 zone resets 00:08:46.658 slat (usec): min=12, max=2629, avg=57.77, stdev=155.03 00:08:46.658 clat (usec): min=1532, max=33708, avg=7030.23, stdev=2489.71 00:08:46.658 lat (usec): min=1550, max=33741, avg=7088.01, stdev=2511.27 00:08:46.658 clat percentiles (usec): 00:08:46.658 | 1.00th=[ 2606], 5.00th=[ 3326], 10.00th=[ 3851], 20.00th=[ 4555], 00:08:46.658 | 30.00th=[ 5407], 40.00th=[ 6915], 50.00th=[ 7439], 60.00th=[ 7832], 00:08:46.658 | 70.00th=[ 8291], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10028], 00:08:46.658 | 99.00th=[14222], 99.50th=[18482], 99.90th=[20579], 99.95th=[21627], 00:08:46.658 | 99.99th=[30540] 00:08:46.658 bw ( KiB/s): min= 40, max=36640, per=86.84%, avg=22379.91, stdev=9433.87, samples=11 00:08:46.658 iops : min= 10, max= 9160, avg=5594.91, stdev=2358.47, samples=11 00:08:46.658 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:08:46.658 lat (msec) : 2=0.14%, 4=6.22%, 10=81.16%, 20=12.06%, 50=0.38% 00:08:46.658 cpu : usr=5.58%, sys=22.18%, ctx=5350, majf=0, minf=139 00:08:46.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:46.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.658 issued rwts: total=63597,33920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.658 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:08:46.658 00:08:46.658 Run status group 0 (all jobs): 00:08:46.658 READ: bw=41.4MiB/s (43.4MB/s), 41.4MiB/s-41.4MiB/s (43.4MB/s-43.4MB/s), io=248MiB (260MB), run=6006-6006msec 00:08:46.658 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=133MiB (139MB), run=5265-5265msec 00:08:46.658 00:08:46.658 Disk stats (read/write): 00:08:46.658 nvme0n1: ios=62979/33072, merge=0/0, ticks=496418/217972, in_queue=714390, util=98.65% 00:08:46.658 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:46.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:46.658 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:46.659 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1226 -- # local i=0 00:08:46.659 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -o NAME,SERIAL 00:08:46.659 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.659 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1234 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.659 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1234 -- # lsblk -l -o NAME,SERIAL 00:08:46.659 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1238 -- # return 0 00:08:46.659 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.916 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:46.916 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:46.916 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:46.916 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:46.916 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:46.916 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:46.916 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.916 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:46.916 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.916 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.916 rmmod nvme_tcp 00:08:47.175 rmmod nvme_fabrics 00:08:47.175 rmmod nvme_keyring 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
64637 ']' 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64637 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' -z 64637 ']' 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@961 -- # kill -0 64637 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # uname 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 64637 00:08:47.175 killing process with pid 64637 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@975 -- # echo 'killing process with pid 64637' 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # kill 64637 00:08:47.175 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@981 -- # wait 64637 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:47.434 08:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:47.434 08:21:34 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:47.694 00:08:47.694 real 0m20.159s 00:08:47.694 user 1m15.498s 00:08:47.694 sys 0m9.153s 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1133 -- # xtrace_disable 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:47.694 ************************************ 00:08:47.694 END TEST nvmf_target_multipath 00:08:47.694 ************************************ 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1114 -- # xtrace_disable 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.694 ************************************ 00:08:47.694 START TEST nvmf_zcopy 00:08:47.694 ************************************ 00:08:47.694 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:47.954 * Looking for test storage... 
00:08:47.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1638 -- # lcov --version 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:08:47.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.954 --rc genhtml_branch_coverage=1 00:08:47.954 --rc genhtml_function_coverage=1 00:08:47.954 --rc genhtml_legend=1 00:08:47.954 --rc geninfo_all_blocks=1 00:08:47.954 --rc geninfo_unexecuted_blocks=1 00:08:47.954 00:08:47.954 ' 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:08:47.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.954 --rc genhtml_branch_coverage=1 00:08:47.954 --rc genhtml_function_coverage=1 00:08:47.954 --rc genhtml_legend=1 00:08:47.954 --rc geninfo_all_blocks=1 00:08:47.954 --rc geninfo_unexecuted_blocks=1 00:08:47.954 00:08:47.954 ' 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:08:47.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.954 --rc genhtml_branch_coverage=1 00:08:47.954 --rc genhtml_function_coverage=1 00:08:47.954 --rc genhtml_legend=1 00:08:47.954 --rc geninfo_all_blocks=1 00:08:47.954 --rc geninfo_unexecuted_blocks=1 00:08:47.954 00:08:47.954 ' 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:08:47.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.954 --rc genhtml_branch_coverage=1 00:08:47.954 --rc genhtml_function_coverage=1 00:08:47.954 --rc genhtml_legend=1 00:08:47.954 --rc geninfo_all_blocks=1 00:08:47.954 --rc geninfo_unexecuted_blocks=1 00:08:47.954 00:08:47.954 ' 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.954 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.955 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.955 08:21:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:47.955 Cannot find device "nvmf_init_br" 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:47.955 Cannot find device "nvmf_init_br2" 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:47.955 Cannot find device "nvmf_tgt_br" 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:47.955 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:47.955 Cannot find device "nvmf_tgt_br2" 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:48.213 Cannot find device "nvmf_init_br" 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:48.213 Cannot find device "nvmf_init_br2" 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:48.213 Cannot find device "nvmf_tgt_br" 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:48.213 Cannot find device "nvmf_tgt_br2" 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:48.213 Cannot find device "nvmf_br" 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:48.213 Cannot find device "nvmf_init_if" 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:48.213 Cannot find device "nvmf_init_if2" 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.213 08:21:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:48.213 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:48.472 08:21:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:48.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:48.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:08:48.472 00:08:48.472 --- 10.0.0.3 ping statistics --- 00:08:48.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.472 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:48.472 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:48.472 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:08:48.472 00:08:48.472 --- 10.0.0.4 ping statistics --- 00:08:48.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.472 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:48.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:48.472 00:08:48.472 --- 10.0.0.1 ping statistics --- 00:08:48.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.472 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:48.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:08:48.472 00:08:48.472 --- 10.0.0.2 ping statistics --- 00:08:48.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.472 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65171 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65171 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # '[' -z 65171 ']' 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@843 -- # local max_retries=100 00:08:48.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@847 -- # xtrace_disable 00:08:48.472 08:21:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:48.472 [2024-11-20 08:21:35.958904] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
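The ipts calls above are a thin wrapper around iptables: each rule is inserted with an 'SPDK_NVMF:' comment, presumably so the harness can later identify and remove exactly the rules it added, and the openings are scoped to TCP port 4420 (the standard NVMe/TCP port) on the two initiator-facing veths, plus intra-bridge forwarding. After the four-way ping sweep confirms the bridged path, the target is launched inside the namespace; its startup banner continues below. A minimal sketch of the same steps, with the rule comments abbreviated:

# Firewall openings made by the trace above; the comment tag makes them easy to list and tear down.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
iptables-save | grep SPDK_NVMF        # list every rule the harness has tagged
# Target runs inside the namespace: instance id 0, all tracepoint groups, core mask 0x2.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &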
00:08:48.472 [2024-11-20 08:21:35.959026] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.730 [2024-11-20 08:21:36.123985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.730 [2024-11-20 08:21:36.190154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.730 [2024-11-20 08:21:36.190219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.730 [2024-11-20 08:21:36.190234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.730 [2024-11-20 08:21:36.190244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.730 [2024-11-20 08:21:36.190253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.730 [2024-11-20 08:21:36.190737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.730 [2024-11-20 08:21:36.252108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@871 -- # return 0 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@735 -- # xtrace_disable 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.000 [2024-11-20 08:21:36.380973] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.000 [2024-11-20 08:21:36.397071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.000 malloc0 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@566 -- # xtrace_disable 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:49.000 { 00:08:49.000 "params": { 00:08:49.000 "name": "Nvme$subsystem", 00:08:49.000 "trtype": "$TEST_TRANSPORT", 00:08:49.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.000 "adrfam": "ipv4", 00:08:49.000 "trsvcid": "$NVMF_PORT", 00:08:49.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.000 "hdgst": ${hdgst:-false}, 00:08:49.000 "ddgst": ${ddgst:-false} 00:08:49.000 }, 00:08:49.000 "method": "bdev_nvme_attach_controller" 00:08:49.000 } 00:08:49.000 EOF 00:08:49.000 )") 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
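Spread across the rpc_cmd traces above is the whole target-side configuration for the zcopy run: a TCP transport with zero-copy enabled, one subsystem (-a allows any host, -s sets the serial number, -m caps the namespace count) with a data listener and a discovery listener on 10.0.0.3:4420, and a 32 MiB malloc bdev exported as namespace 1. Replayed by hand against the target's RPC socket with SPDK's stock rpc.py client (an illustrative substitution for the harness's rpc_cmd wrapper), the sequence is:

# Target configuration as traced above; -o and -c 0 are passed through exactly as the harness does.
rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy              # TCP transport with zero-copy enabled
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
rpc.py bdev_malloc_create 32 4096 -b malloc0                     # 32 MiB RAM-backed bdev, 4096-byte blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1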
00:08:49.000 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:49.001 08:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:49.001 "params": { 00:08:49.001 "name": "Nvme1", 00:08:49.001 "trtype": "tcp", 00:08:49.001 "traddr": "10.0.0.3", 00:08:49.001 "adrfam": "ipv4", 00:08:49.001 "trsvcid": "4420", 00:08:49.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.001 "hdgst": false, 00:08:49.001 "ddgst": false 00:08:49.001 }, 00:08:49.001 "method": "bdev_nvme_attach_controller" 00:08:49.001 }' 00:08:49.001 [2024-11-20 08:21:36.496363] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:08:49.001 [2024-11-20 08:21:36.496511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65191 ] 00:08:49.259 [2024-11-20 08:21:36.650489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.259 [2024-11-20 08:21:36.717898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.259 [2024-11-20 08:21:36.787234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.522 Running I/O for 10 seconds... 00:08:51.407 6052.00 IOPS, 47.28 MiB/s [2024-11-20T08:21:40.341Z] 6006.50 IOPS, 46.93 MiB/s [2024-11-20T08:21:41.277Z] 5983.67 IOPS, 46.75 MiB/s [2024-11-20T08:21:42.211Z] 5991.75 IOPS, 46.81 MiB/s [2024-11-20T08:21:43.145Z] 6010.00 IOPS, 46.95 MiB/s [2024-11-20T08:21:44.080Z] 6046.00 IOPS, 47.23 MiB/s [2024-11-20T08:21:45.015Z] 6051.86 IOPS, 47.28 MiB/s [2024-11-20T08:21:45.952Z] 6037.25 IOPS, 47.17 MiB/s [2024-11-20T08:21:47.339Z] 6063.33 IOPS, 47.37 MiB/s [2024-11-20T08:21:47.339Z] 6053.40 IOPS, 47.29 MiB/s 00:08:59.778 Latency(us) 00:08:59.778 [2024-11-20T08:21:47.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.778 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:59.778 Verification LBA range: start 0x0 length 0x1000 00:08:59.778 Nvme1n1 : 10.02 6056.72 47.32 0.00 0.00 21071.12 2249.08 29550.78 00:08:59.778 [2024-11-20T08:21:47.339Z] =================================================================================================================== 00:08:59.778 [2024-11-20T08:21:47.339Z] Total : 6056.72 47.32 0.00 0.00 21071.12 2249.08 29550.78 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65314 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:59.778 { 00:08:59.778 "params": { 00:08:59.778 "name": "Nvme$subsystem", 00:08:59.778 "trtype": "$TEST_TRANSPORT", 00:08:59.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:59.778 "adrfam": "ipv4", 00:08:59.778 "trsvcid": "$NVMF_PORT", 00:08:59.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:59.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:59.778 "hdgst": ${hdgst:-false}, 00:08:59.778 "ddgst": ${ddgst:-false} 00:08:59.778 }, 00:08:59.778 "method": "bdev_nvme_attach_controller" 00:08:59.778 } 00:08:59.778 EOF 00:08:59.778 )") 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:59.778 [2024-11-20 08:21:47.173472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.778 [2024-11-20 08:21:47.173539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:59.778 08:21:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:59.779 "params": { 00:08:59.779 "name": "Nvme1", 00:08:59.779 "trtype": "tcp", 00:08:59.779 "traddr": "10.0.0.3", 00:08:59.779 "adrfam": "ipv4", 00:08:59.779 "trsvcid": "4420", 00:08:59.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:59.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:59.779 "hdgst": false, 00:08:59.779 "ddgst": false 00:08:59.779 }, 00:08:59.779 "method": "bdev_nvme_attach_controller" 00:08:59.779 }' 00:08:59.779 [2024-11-20 08:21:47.181403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.181439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.189413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.189451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.201412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.201447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.209418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.209454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.221410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.221445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.229429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.229464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.232962] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
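For the 10-second verify run reported above, bdevperf prints both IOPS and MiB/s; with the 8192-byte I/O size used here the two columns are unit conversions of each other, which gives a quick way to sanity-check the table:

# IOPS x 8 KiB, expressed in MiB/s, reproduces the reported average of 47.32 MiB/s.
echo '6056.72 * 8192 / (1024 * 1024)' | bc -l      # ~47.32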
00:08:59.779 [2024-11-20 08:21:47.233066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65314 ] 00:08:59.779 [2024-11-20 08:21:47.237420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.237457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.249423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.249461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.261427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.261463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.273437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.273479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.285448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.285494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.297443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.297480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.309442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.309477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.321452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.321496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.779 [2024-11-20 08:21:47.333456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.779 [2024-11-20 08:21:47.333497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.341455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.341499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.349451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.349485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.357461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.357503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.365452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.365489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.377477] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.377524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.385469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.385504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.393475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.393514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.405479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.405524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.413488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.413531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.425470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.425504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.436229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.038 [2024-11-20 08:21:47.437491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.437540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.449501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.449550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.461492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.461529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.469491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.469526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.477493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.477533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.485505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.485550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.493505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.038 [2024-11-20 08:21:47.493544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.038 [2024-11-20 08:21:47.501498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.039 [2024-11-20 08:21:47.501535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.039 [2024-11-20 08:21:47.509513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:00.039 [2024-11-20 08:21:47.509559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.039 [2024-11-20 08:21:47.517501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.039 [2024-11-20 08:21:47.517538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.039 [2024-11-20 08:21:47.522819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.039 [2024-11-20 08:21:47.529519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.039 [2024-11-20 08:21:47.529560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.039 [2024-11-20 08:21:47.537521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.039 [2024-11-20 08:21:47.537562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.039 [2024-11-20 08:21:47.549530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.039 [2024-11-20 08:21:47.549571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.039 [2024-11-20 08:21:47.557510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.039 [2024-11-20 08:21:47.557543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.039 [2024-11-20 08:21:47.569537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.039 [2024-11-20 08:21:47.569580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.039 [2024-11-20 08:21:47.577532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.039 [2024-11-20 08:21:47.577574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.039 [2024-11-20 08:21:47.585518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.039 [2024-11-20 08:21:47.585557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.039 [2024-11-20 08:21:47.593540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.039 [2024-11-20 08:21:47.593576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.601525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.601557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.604436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.298 [2024-11-20 08:21:47.613534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.613571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.621530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.621575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.629533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.629571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:00.298 [2024-11-20 08:21:47.637535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.637566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.649545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.649580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.661558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.661599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.673864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.673901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.685864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.685899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.697891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.697928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.709894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.709932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.721919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.721960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.733973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.734016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 Running I/O for 5 seconds... 
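The second bdevperf instance above drives a 50/50 random read/write workload (-w randrw -M 50) at queue depth 128 with 8 KiB I/Os for 5 seconds while the namespace RPCs keep being re-issued. The --json /dev/fd/63 argument in the trace is simply the file descriptor that bash process substitution expands to; run standalone, an equivalent invocation would look like this sketch (gen_nvmf_target_json is the harness helper seen above emitting the bdev_nvme_attach_controller config):

# Sketch: feed the generated attach-controller JSON to bdevperf without a temporary file.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192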
00:09:00.298 [2024-11-20 08:21:47.746020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.746066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.763763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.763861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.779646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.779722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.796918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.797015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.812596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.812680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.823487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.823570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.838605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.838678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.298 [2024-11-20 08:21:47.854660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.298 [2024-11-20 08:21:47.854734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:47.864561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:47.864643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:47.881244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:47.881311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:47.895429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:47.895529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:47.912089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:47.912167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:47.928087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:47.928172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:47.946147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:47.946249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:47.961934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 
[2024-11-20 08:21:47.961988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:47.978504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:47.978559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:47.997092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:47.997143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:48.011540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:48.011577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:48.026950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:48.026987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:48.035811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:48.035859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:48.050985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:48.051021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:48.067655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:48.067756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:48.082721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:48.082767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.557 [2024-11-20 08:21:48.093364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.557 [2024-11-20 08:21:48.093412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.558 [2024-11-20 08:21:48.107914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.558 [2024-11-20 08:21:48.107952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.816 [2024-11-20 08:21:48.124436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.816 [2024-11-20 08:21:48.124473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.816 [2024-11-20 08:21:48.140188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.816 [2024-11-20 08:21:48.140225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.816 [2024-11-20 08:21:48.158267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.816 [2024-11-20 08:21:48.158318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.816 [2024-11-20 08:21:48.173618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.816 [2024-11-20 08:21:48.173657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.816 [2024-11-20 08:21:48.190461] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.816 [2024-11-20 08:21:48.190498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.816 [2024-11-20 08:21:48.207380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.816 [2024-11-20 08:21:48.207417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.817 [2024-11-20 08:21:48.224085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.817 [2024-11-20 08:21:48.224124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.817 [2024-11-20 08:21:48.240495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.817 [2024-11-20 08:21:48.240532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.817 [2024-11-20 08:21:48.257883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.817 [2024-11-20 08:21:48.257943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.817 [2024-11-20 08:21:48.272164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.817 [2024-11-20 08:21:48.272202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.817 [2024-11-20 08:21:48.287460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.817 [2024-11-20 08:21:48.287497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.817 [2024-11-20 08:21:48.306472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.817 [2024-11-20 08:21:48.306508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.817 [2024-11-20 08:21:48.320293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.817 [2024-11-20 08:21:48.320340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.817 [2024-11-20 08:21:48.335459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.817 [2024-11-20 08:21:48.335519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.817 [2024-11-20 08:21:48.346581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.817 [2024-11-20 08:21:48.346640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.817 [2024-11-20 08:21:48.362385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.817 [2024-11-20 08:21:48.362420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.379092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.379149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.395908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.395947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.413597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.413633] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.432193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.432241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.446255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.446294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.462166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.462226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.479674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.479717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.494777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.494822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.511044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.511077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.528162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.528224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.543196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.543243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.560461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.560495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.575776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.575839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.591180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.591224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.601407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.601466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.617260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.617345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.075 [2024-11-20 08:21:48.634490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.075 [2024-11-20 08:21:48.634524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.332 [2024-11-20 08:21:48.650620] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.332 [2024-11-20 08:21:48.650655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.332 [2024-11-20 08:21:48.660025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.332 [2024-11-20 08:21:48.660063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.332 [2024-11-20 08:21:48.676397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.332 [2024-11-20 08:21:48.676444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.332 [2024-11-20 08:21:48.692499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.332 [2024-11-20 08:21:48.692543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.332 [2024-11-20 08:21:48.701938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.332 [2024-11-20 08:21:48.701979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.332 [2024-11-20 08:21:48.716577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.332 [2024-11-20 08:21:48.716612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.332 [2024-11-20 08:21:48.733562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.332 [2024-11-20 08:21:48.733595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.332 11639.00 IOPS, 90.93 MiB/s [2024-11-20T08:21:48.894Z] [2024-11-20 08:21:48.748937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.333 [2024-11-20 08:21:48.748974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.333 [2024-11-20 08:21:48.758404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.333 [2024-11-20 08:21:48.758441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.333 [2024-11-20 08:21:48.775009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.333 [2024-11-20 08:21:48.775083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.333 [2024-11-20 08:21:48.792445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.333 [2024-11-20 08:21:48.792507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.333 [2024-11-20 08:21:48.808791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.333 [2024-11-20 08:21:48.808847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.333 [2024-11-20 08:21:48.818711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.333 [2024-11-20 08:21:48.818771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.333 [2024-11-20 08:21:48.834375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.333 [2024-11-20 08:21:48.834422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.333 [2024-11-20 08:21:48.851136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:01.333 [2024-11-20 08:21:48.851169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.333 [2024-11-20 08:21:48.867774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.333 [2024-11-20 08:21:48.867825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.333 [2024-11-20 08:21:48.884580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.333 [2024-11-20 08:21:48.884637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:48.900950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:48.901001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:48.916746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:48.916783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:48.935227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:48.935289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:48.950524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:48.950560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:48.967100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:48.967137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:48.984988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:48.985022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:49.000123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:49.000187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:49.016382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:49.016417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:49.033006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:49.033058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:49.049893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:49.049944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:49.066328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:49.066396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:49.083781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:49.083830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:49.099771] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:49.099837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:49.116859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:49.116905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:49.133692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:49.133747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.590 [2024-11-20 08:21:49.148597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.590 [2024-11-20 08:21:49.148634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.847 [2024-11-20 08:21:49.164803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.847 [2024-11-20 08:21:49.164854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.847 [2024-11-20 08:21:49.180584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.847 [2024-11-20 08:21:49.180620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.847 [2024-11-20 08:21:49.197336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.847 [2024-11-20 08:21:49.197397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.847 [2024-11-20 08:21:49.214319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.847 [2024-11-20 08:21:49.214364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.231016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.231054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.247652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.247691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.263899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.263936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.273574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.273615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.284983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.285021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.296314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.296365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.307973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.308019] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.324143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.324181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.340347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.340396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.357820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.357873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.373005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.373045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.383086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.383130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.848 [2024-11-20 08:21:49.398875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.848 [2024-11-20 08:21:49.398914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.415294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.415335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.432349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.432387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.447147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.447188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.462921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.462968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.479405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.479456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.495692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.495774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.511807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.511860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.530226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.530323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.540175] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.540212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.554790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.554842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.571642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.571694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.588165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.588202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.605159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.605204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.614752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.614789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.628868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.628902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.644494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.644528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.105 [2024-11-20 08:21:49.660622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.105 [2024-11-20 08:21:49.660657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.670057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.670095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.685355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.685391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.700825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.700881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.719647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.719690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.735180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.735227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 11604.50 IOPS, 90.66 MiB/s [2024-11-20T08:21:49.924Z] [2024-11-20 08:21:49.753095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:02.363 [2024-11-20 08:21:49.753149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.769128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.769185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.787725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.787794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.803551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.803604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.819758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.819830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.837889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.837953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.852948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.853009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.864232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.864302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.363 [2024-11-20 08:21:49.880459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.363 [2024-11-20 08:21:49.880515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.364 [2024-11-20 08:21:49.896771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.364 [2024-11-20 08:21:49.896824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.364 [2024-11-20 08:21:49.908485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.364 [2024-11-20 08:21:49.908537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:49.924202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:49.924237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:49.933982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:49.934050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:49.948567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:49.948612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:49.963534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:49.963584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:49.974726] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:49.974777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:49.989945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:49.989984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:50.001481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:50.001512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:50.017320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:50.017353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:50.034193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:50.034240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:50.050539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:50.050574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:50.067709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:50.067769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:50.083987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:50.084027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:50.100602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:50.100645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:50.116393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:50.116439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:50.127465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:50.127510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:50.143153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:50.143188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:50.160648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:50.160684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.622 [2024-11-20 08:21:50.174984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.622 [2024-11-20 08:21:50.175021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.189644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.189697] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.205247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.205316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.215166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.215214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.227474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.227521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.242443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.242484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.258236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.258300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.274514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.274580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.293191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.293260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.307910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.307963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.323396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.323447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.341436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.341497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.355931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.355972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.365242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.365289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.381131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.381177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.397341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.397376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.415650] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.415704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.880 [2024-11-20 08:21:50.430784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.880 [2024-11-20 08:21:50.430835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.446858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.446926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.463926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.463962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.481233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.481268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.496745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.496783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.515328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.515364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.530675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.530714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.540859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.540908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.555338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.555373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.565511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.565561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.581024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.581057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.595942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.595990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.605449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.605495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.622248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.622284] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.639258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.639293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.655402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.655437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.673116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.673150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.139 [2024-11-20 08:21:50.689290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.139 [2024-11-20 08:21:50.689325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.705633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.705673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.721563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.721601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.740228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.740267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 11651.00 IOPS, 91.02 MiB/s [2024-11-20T08:21:50.959Z] [2024-11-20 08:21:50.755130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.755168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.765155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.765193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.780466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.780504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.790817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.790882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.806780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.806836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.821901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.821940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.838596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.838659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 
08:21:50.854620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.854656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.873373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.873412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.888601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.888652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.907303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.907338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.921980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.922017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.937272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.937306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.398 [2024-11-20 08:21:50.954856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.398 [2024-11-20 08:21:50.954901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:50.970532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:50.970568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:50.988306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:50.988341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.003150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.003185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.019409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.019444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.036253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.036288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.053323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.053356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.070330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.070377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.085404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.085444] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.101761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.101830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.118637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.118691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.135336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.135384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.151242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.151277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.168780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.168827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.184079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.184123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.192982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.193039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.657 [2024-11-20 08:21:51.206475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.657 [2024-11-20 08:21:51.206525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.222495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.222543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.239930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.239976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.256136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.256188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.271892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.271936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.289722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.289759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.300293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.300342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.313400] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.313466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.322437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.322489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.337402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.337457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.356660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.356705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.370754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.370791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.386437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.386473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.402950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.402993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.420738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.420773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.435242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.435289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.452142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.452219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.916 [2024-11-20 08:21:51.467796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.916 [2024-11-20 08:21:51.467862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.483189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.483237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.502868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.502914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.517165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.517213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.526573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.526619] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.541089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.541138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.557777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.557862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.568374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.568408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.582168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.582206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.596310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.596361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.611054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.611128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.628297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.628378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.643906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.643959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.659305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.659351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.677531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.677569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.691386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.691432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.706559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.706606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.175 [2024-11-20 08:21:51.718069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.175 [2024-11-20 08:21:51.718104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.734635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.734671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 11766.75 IOPS, 91.93 MiB/s [2024-11-20T08:21:51.994Z] [2024-11-20 
08:21:51.751166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.751214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.767187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.767235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.785467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.785513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.800428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.800487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.811150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.811207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.826172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.826238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.841478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.841528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.858019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.858066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.873541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.873589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.891353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.891388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.907017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.907061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.924092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.924126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.940473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.940508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.956760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.956795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.973992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.974027] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.433 [2024-11-20 08:21:51.990972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.433 [2024-11-20 08:21:51.991008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.007535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.007602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.023785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.023839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.040085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.040121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.056722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.056754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.073194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.073227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.089555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.089604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.106121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.106157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.122654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.122695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.139374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.139412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.155450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.155488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.173699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.173740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.190165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.190206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.206306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.206342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.223408] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.223444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.239321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.239357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.691 [2024-11-20 08:21:52.249214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.691 [2024-11-20 08:21:52.249251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.949 [2024-11-20 08:21:52.266009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.266045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.277101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.277135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.292177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.292365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.308095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.308133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.317568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.317609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.332588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.332628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.343290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.343467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.358853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.359093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.375491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.375673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.394364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.394564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.410597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.410774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.425764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.425965] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.444424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.444610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.459704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.459886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.469529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.469704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.485380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.485417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.950 [2024-11-20 08:21:52.501184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.950 [2024-11-20 08:21:52.501222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.208 [2024-11-20 08:21:52.511040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.208 [2024-11-20 08:21:52.511078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.208 [2024-11-20 08:21:52.527664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.208 [2024-11-20 08:21:52.527704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.208 [2024-11-20 08:21:52.543042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.208 [2024-11-20 08:21:52.543081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.208 [2024-11-20 08:21:52.559367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.208 [2024-11-20 08:21:52.559406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.208 [2024-11-20 08:21:52.575143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.208 [2024-11-20 08:21:52.575183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.208 [2024-11-20 08:21:52.584653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.208 [2024-11-20 08:21:52.584838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.208 [2024-11-20 08:21:52.600982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.208 [2024-11-20 08:21:52.601039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.208 [2024-11-20 08:21:52.617428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.208 [2024-11-20 08:21:52.617468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.208 [2024-11-20 08:21:52.635775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.208 [2024-11-20 08:21:52.635828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.208 [2024-11-20 08:21:52.650254] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.208 [2024-11-20 08:21:52.650293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.208 [2024-11-20 08:21:52.666294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.208 [2024-11-20 08:21:52.666334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.208 [2024-11-20 08:21:52.683164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.208 [2024-11-20 08:21:52.683203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.208 [2024-11-20 08:21:52.693281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.208 [2024-11-20 08:21:52.693320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.208 [2024-11-20 08:21:52.705244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.208 [2024-11-20 08:21:52.705280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.208 [2024-11-20 08:21:52.716533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.208 [2024-11-20 08:21:52.716758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.208 [2024-11-20 08:21:52.733024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.208 [2024-11-20 08:21:52.733077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.208 11700.40 IOPS, 91.41 MiB/s [2024-11-20T08:21:52.769Z] [2024-11-20 08:21:52.747125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.208 [2024-11-20 08:21:52.747286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.208
00:09:05.208 Latency(us)
00:09:05.208 [2024-11-20T08:21:52.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:05.208 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:05.208 Nvme1n1 : 5.01 11699.49 91.40 0.00 0.00 10926.31 4081.11 19065.02
00:09:05.208 [2024-11-20T08:21:52.769Z] ===================================================================================================================
00:09:05.208 [2024-11-20T08:21:52.769Z] Total : 11699.49 91.40 0.00 0.00 10926.31 4081.11 19065.02
00:09:05.208 [2024-11-20 08:21:52.757405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.208 [2024-11-20 08:21:52.757563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.467 [2024-11-20 08:21:52.769411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.467 [2024-11-20 08:21:52.769564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.467 [2024-11-20 08:21:52.781403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.467 [2024-11-20 08:21:52.781546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.467 [2024-11-20 08:21:52.793407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.467 [2024-11-20 08:21:52.793549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.467 [2024-11-20
08:21:52.805403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.805570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.817412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.817446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.829413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.829447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.841415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.841447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.853414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.853459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.865411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.865455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.877431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.877460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.889415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.889443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.901434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.901463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.913421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.913463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.925421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.925465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.937425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.937453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.949425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.949453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.961428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.961460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.973448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.973476] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.985445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.985471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:52.997457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:52.997488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 [2024-11-20 08:21:53.009468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.467 [2024-11-20 08:21:53.009495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.467 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65314) - No such process 00:09:05.467 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65314 00:09:05.467 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.467 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:05.467 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.726 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:05.726 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:05.726 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:05.726 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.726 delay0 00:09:05.726 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:05.726 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:05.726 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:05.726 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.726 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:05.726 08:21:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:05.726 [2024-11-20 08:21:53.221907] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:12.283 Initializing NVMe Controllers 00:09:12.283 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:12.283 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:12.283 Initialization complete. Launching workers. 
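The abort run starting here is the tail end of the zcopy test: the long run of "Requested NSID 1 already in use" errors above comes from repeated nvmf_subsystem_add_ns calls issued while NSID 1 is still occupied, after which the script frees NSID 1, layers a delay bdev over malloc0, re-exposes it as namespace 1 and aborts slow I/O against it. A condensed sketch of that sequence, using scripts/rpc.py from the repo root instead of the suite's rpc_cmd wrapper (the running nvmf_tgt, the malloc0 bdev and subsystem cnode1 are assumed to exist exactly as in this run):

    # free NSID 1 on cnode1, then wrap the existing malloc0 bdev in a delay bdev
    # (the suite passes 1000000 for all four latency knobs, as seen in the trace)
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the (now very slow) delay bdev as NSID 1 again
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive random I/O at the TCP listener for 5 seconds and abort it from the initiator side
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The abort statistics that follow are the output of that last command.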
00:09:12.283 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 271, failed: 14263 00:09:12.283 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14442, failed to submit 92 00:09:12.283 success 14379, unsuccessful 63, failed 0 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:12.283 rmmod nvme_tcp 00:09:12.283 rmmod nvme_fabrics 00:09:12.283 rmmod nvme_keyring 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65171 ']' 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65171 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' -z 65171 ']' 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@961 -- # kill -0 65171 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # uname 00:09:12.283 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:09:12.284 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 65171 00:09:12.284 killing process with pid 65171 00:09:12.284 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:09:12.284 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:09:12.284 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@975 -- # echo 'killing process with pid 65171' 00:09:12.284 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # kill 65171 00:09:12.284 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@981 -- # wait 65171 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:12.542 08:21:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:12.542 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:12.543 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:12.543 08:21:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:12.543 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:12.543 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:12.543 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:12.543 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.801 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.801 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:12.801 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.801 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.801 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.801 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:12.801 00:09:12.801 real 0m24.964s 00:09:12.802 user 0m39.655s 00:09:12.802 sys 0m7.846s 00:09:12.802 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1133 -- # xtrace_disable 00:09:12.802 ************************************ 00:09:12.802 END TEST nvmf_zcopy 00:09:12.802 ************************************ 00:09:12.802 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.802 08:22:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:12.802 08:22:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:09:12.802 08:22:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:12.802 08:22:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.802 ************************************ 00:09:12.802 START TEST nvmf_nmic 00:09:12.802 ************************************ 00:09:12.802 08:22:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:12.802 * Looking for test storage... 00:09:12.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.802 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:09:12.802 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1638 -- # lcov --version 00:09:12.802 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:09:13.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.062 --rc genhtml_branch_coverage=1 00:09:13.062 --rc genhtml_function_coverage=1 00:09:13.062 --rc genhtml_legend=1 00:09:13.062 --rc geninfo_all_blocks=1 00:09:13.062 --rc geninfo_unexecuted_blocks=1 00:09:13.062 00:09:13.062 ' 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:09:13.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.062 --rc genhtml_branch_coverage=1 00:09:13.062 --rc genhtml_function_coverage=1 00:09:13.062 --rc genhtml_legend=1 00:09:13.062 --rc geninfo_all_blocks=1 00:09:13.062 --rc geninfo_unexecuted_blocks=1 00:09:13.062 00:09:13.062 ' 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:09:13.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.062 --rc genhtml_branch_coverage=1 00:09:13.062 --rc genhtml_function_coverage=1 00:09:13.062 --rc genhtml_legend=1 00:09:13.062 --rc geninfo_all_blocks=1 00:09:13.062 --rc geninfo_unexecuted_blocks=1 00:09:13.062 00:09:13.062 ' 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:09:13.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.062 --rc genhtml_branch_coverage=1 00:09:13.062 --rc genhtml_function_coverage=1 00:09:13.062 --rc genhtml_legend=1 00:09:13.062 --rc geninfo_all_blocks=1 00:09:13.062 --rc geninfo_unexecuted_blocks=1 00:09:13.062 00:09:13.062 ' 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.062 08:22:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.062 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.063 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:13.063 Cannot find device "nvmf_init_br" 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:13.063 Cannot find device "nvmf_init_br2" 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:13.063 Cannot find device "nvmf_tgt_br" 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:13.063 Cannot find device "nvmf_tgt_br2" 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:13.063 Cannot find device "nvmf_init_br" 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:13.063 Cannot find device "nvmf_init_br2" 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:13.063 Cannot find device "nvmf_tgt_br" 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:13.063 Cannot find device "nvmf_tgt_br2" 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:13.063 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:13.322 Cannot find device "nvmf_br" 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:13.322 Cannot find device "nvmf_init_if" 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:13.322 Cannot find device "nvmf_init_if2" 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:13.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:13.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:13.322 08:22:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:13.322 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:13.581 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:13.581 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:09:13.581 00:09:13.581 --- 10.0.0.3 ping statistics --- 00:09:13.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.581 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:13.581 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:13.581 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.109 ms 00:09:13.581 00:09:13.581 --- 10.0.0.4 ping statistics --- 00:09:13.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.581 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:13.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:09:13.581 00:09:13.581 --- 10.0.0.1 ping statistics --- 00:09:13.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.581 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:13.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:13.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:09:13.581 00:09:13.581 --- 10.0.0.2 ping statistics --- 00:09:13.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.581 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65711 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65711 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # '[' -z 65711 ']' 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@843 -- # local max_retries=100 00:09:13.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@847 -- # xtrace_disable 00:09:13.581 08:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.581 [2024-11-20 08:22:01.010994] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
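By this point the nmic suite has torn down any leftover interfaces (the "Cannot find device" lines above are the expected no-ops of that cleanup), rebuilt its virtual topology, verified it with pings and launched nvmf_tgt inside the network namespace. The same bring-up, condensed to one veth pair and with the suite's helper functions replaced by the underlying commands they log (interface, namespace, address and binary names are the ones used in this run):

    # namespace for the target plus veth pairs; the host-side peers go onto a bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator side 10.0.0.1, target side 10.0.0.3, then bring everything up
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br up
    # admit NVMe/TCP traffic and check reachability before starting the target
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3
    # load the host-side transport and launch the target inside the namespace
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The EAL and reactor messages that follow are the startup output of that nvmf_tgt process.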
00:09:13.581 [2024-11-20 08:22:01.011440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.840 [2024-11-20 08:22:01.172624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.840 [2024-11-20 08:22:01.249552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.840 [2024-11-20 08:22:01.249874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.840 [2024-11-20 08:22:01.250129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.840 [2024-11-20 08:22:01.250328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.840 [2024-11-20 08:22:01.250436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.840 [2024-11-20 08:22:01.251872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.840 [2024-11-20 08:22:01.252008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.840 [2024-11-20 08:22:01.252093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.840 [2024-11-20 08:22:01.252095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.840 [2024-11-20 08:22:01.312815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@871 -- # return 0 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@735 -- # xtrace_disable 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.774 [2024-11-20 08:22:02.071358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.774 Malloc0 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:14.774 08:22:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.774 [2024-11-20 08:22:02.156013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:14.774 test case1: single bdev can't be used in multiple subsystems 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.774 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.775 [2024-11-20 08:22:02.179851] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:14.775 [2024-11-20 08:22:02.179904] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:14.775 [2024-11-20 08:22:02.179923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.775 request: 00:09:14.775 { 00:09:14.775 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:14.775 "namespace": { 00:09:14.775 "bdev_name": "Malloc0", 00:09:14.775 "no_auto_visible": false 00:09:14.775 }, 00:09:14.775 "method": "nvmf_subsystem_add_ns", 00:09:14.775 "req_id": 1 00:09:14.775 } 00:09:14.775 Got JSON-RPC error response 00:09:14.775 response: 00:09:14.775 { 00:09:14.775 "code": -32602, 00:09:14.775 "message": "Invalid parameters" 00:09:14.775 } 00:09:14.775 Adding namespace failed - expected result. 00:09:14.775 test case2: host connect to nvmf target in multiple paths 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.775 [2024-11-20 08:22:02.191969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:14.775 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:15.034 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:15.034 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.034 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # local i=0 00:09:15.034 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.034 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # [[ -n '' ]] 00:09:15.034 08:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # sleep 2 00:09:16.939 08:22:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1213 -- # (( i++ <= 15 )) 00:09:16.939 08:22:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1214 -- # lsblk -l -o NAME,SERIAL 00:09:16.939 08:22:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1214 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.939 08:22:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1214 -- # nvme_devices=1 00:09:16.939 08:22:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1215 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.939 08:22:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1215 -- # return 0 00:09:16.939 08:22:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:17.198 [global] 00:09:17.198 thread=1 00:09:17.198 invalidate=1 00:09:17.198 rw=write 00:09:17.198 time_based=1 00:09:17.198 runtime=1 00:09:17.198 ioengine=libaio 00:09:17.198 direct=1 00:09:17.198 bs=4096 00:09:17.198 iodepth=1 00:09:17.198 norandommap=0 00:09:17.198 numjobs=1 00:09:17.198 00:09:17.198 verify_dump=1 00:09:17.198 verify_backlog=512 00:09:17.198 verify_state_save=0 00:09:17.198 do_verify=1 00:09:17.198 verify=crc32c-intel 00:09:17.198 [job0] 00:09:17.198 filename=/dev/nvme0n1 00:09:17.198 Could not set queue depth (nvme0n1) 00:09:17.198 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:17.198 fio-3.35 00:09:17.198 Starting 1 thread 00:09:18.577 00:09:18.577 job0: (groupid=0, jobs=1): err= 0: pid=65803: Wed Nov 20 08:22:05 2024 00:09:18.577 read: IOPS=2350, BW=9403KiB/s (9628kB/s)(9412KiB/1001msec) 00:09:18.577 slat (nsec): min=11816, max=43442, avg=14988.12, stdev=3363.10 00:09:18.577 clat (usec): min=152, max=786, avg=229.93, stdev=39.26 00:09:18.577 lat (usec): min=165, max=800, avg=244.92, stdev=40.07 00:09:18.577 clat percentiles (usec): 00:09:18.577 | 1.00th=[ 163], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 200], 00:09:18.577 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 235], 00:09:18.577 | 70.00th=[ 243], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 289], 00:09:18.577 | 99.00th=[ 371], 99.50th=[ 412], 99.90th=[ 461], 99.95th=[ 478], 00:09:18.577 | 99.99th=[ 783] 00:09:18.577 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:18.577 slat (usec): min=16, max=122, avg=20.67, stdev= 5.62 00:09:18.577 clat (usec): min=90, max=476, avg=141.72, stdev=24.09 00:09:18.577 lat (usec): min=108, max=598, avg=162.40, stdev=26.52 00:09:18.577 clat percentiles (usec): 00:09:18.577 | 1.00th=[ 98], 5.00th=[ 106], 10.00th=[ 112], 20.00th=[ 122], 00:09:18.577 | 30.00th=[ 128], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 147], 00:09:18.577 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 182], 00:09:18.577 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 235], 99.95th=[ 241], 00:09:18.577 | 99.99th=[ 478] 00:09:18.577 bw ( KiB/s): min=10512, max=10512, per=100.00%, avg=10512.00, stdev= 0.00, samples=1 00:09:18.577 iops : min= 2628, max= 2628, avg=2628.00, stdev= 0.00, samples=1 00:09:18.577 lat (usec) : 100=0.77%, 250=87.71%, 500=11.50%, 1000=0.02% 00:09:18.577 cpu : usr=1.60%, sys=7.00%, ctx=4913, majf=0, minf=5 00:09:18.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.577 issued rwts: total=2353,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.577 00:09:18.577 Run status group 0 (all jobs): 00:09:18.577 READ: bw=9403KiB/s (9628kB/s), 9403KiB/s-9403KiB/s (9628kB/s-9628kB/s), io=9412KiB (9638kB), run=1001-1001msec 00:09:18.577 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:09:18.577 00:09:18.577 Disk stats (read/write): 00:09:18.577 nvme0n1: ios=2097/2314, merge=0/0, ticks=516/356, 
in_queue=872, util=91.26% 00:09:18.577 08:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:18.577 08:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.577 08:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1226 -- # local i=0 00:09:18.577 08:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -o NAME,SERIAL 00:09:18.577 08:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.577 08:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1234 -- # lsblk -l -o NAME,SERIAL 00:09:18.577 08:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1234 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1238 -- # return 0 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.577 rmmod nvme_tcp 00:09:18.577 rmmod nvme_fabrics 00:09:18.577 rmmod nvme_keyring 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65711 ']' 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65711 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' -z 65711 ']' 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@961 -- # kill -0 65711 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # uname 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:09:18.577 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 65711 00:09:18.836 killing process with pid 65711 00:09:18.836 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:09:18.836 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:09:18.836 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@975 -- # echo 'killing process with pid 65711' 00:09:18.837 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # 
kill 65711 00:09:18.837 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@981 -- # wait 65711 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:19.096 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:19.355 00:09:19.355 real 0m6.490s 00:09:19.355 user 0m20.069s 00:09:19.355 sys 0m2.097s 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1133 -- # xtrace_disable 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:19.355 ************************************ 
00:09:19.355 END TEST nvmf_nmic 00:09:19.355 ************************************ 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.355 ************************************ 00:09:19.355 START TEST nvmf_fio_target 00:09:19.355 ************************************ 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:19.355 * Looking for test storage... 00:09:19.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1638 -- # lcov --version 00:09:19.355 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:19.614 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:09:19.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.615 --rc genhtml_branch_coverage=1 00:09:19.615 --rc genhtml_function_coverage=1 00:09:19.615 --rc genhtml_legend=1 00:09:19.615 --rc geninfo_all_blocks=1 00:09:19.615 --rc geninfo_unexecuted_blocks=1 00:09:19.615 00:09:19.615 ' 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:09:19.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.615 --rc genhtml_branch_coverage=1 00:09:19.615 --rc genhtml_function_coverage=1 00:09:19.615 --rc genhtml_legend=1 00:09:19.615 --rc geninfo_all_blocks=1 00:09:19.615 --rc geninfo_unexecuted_blocks=1 00:09:19.615 00:09:19.615 ' 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:09:19.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.615 --rc genhtml_branch_coverage=1 00:09:19.615 --rc genhtml_function_coverage=1 00:09:19.615 --rc genhtml_legend=1 00:09:19.615 --rc geninfo_all_blocks=1 00:09:19.615 --rc geninfo_unexecuted_blocks=1 00:09:19.615 00:09:19.615 ' 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:09:19.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.615 --rc genhtml_branch_coverage=1 00:09:19.615 --rc genhtml_function_coverage=1 00:09:19.615 --rc genhtml_legend=1 00:09:19.615 --rc geninfo_all_blocks=1 00:09:19.615 --rc geninfo_unexecuted_blocks=1 00:09:19.615 00:09:19.615 ' 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:19.615 
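Note: the trace just above is scripts/common.sh deciding whether the detected lcov (1.15) predates version 2 — cmp_versions splits each version string on '.-:' into an array and compares the fields numerically, left to right; because 1.15 is older, the LCOV_OPTS/LCOV exports above pick up the extra --rc lcov_branch_coverage / lcov_function_coverage options. A minimal standalone sketch of that comparison, assuming purely numeric version fields (the helper name version_lt is illustrative, not part of the repo):

    version_lt() {                          # succeeds (returns 0) when $1 < $2
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}  # missing fields count as 0
            (( x < y )) && return 0          # first differing field decides
            (( x > y )) && return 1
        done
        return 1                             # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo older          # 1 < 2 in the first field, so this prints "older"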
08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.615 08:22:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.615 08:22:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.615 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:19.615 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:19.616 08:22:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:19.616 Cannot find device "nvmf_init_br" 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:19.616 Cannot find device "nvmf_init_br2" 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:19.616 Cannot find device "nvmf_tgt_br" 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.616 Cannot find device "nvmf_tgt_br2" 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:19.616 Cannot find device "nvmf_init_br" 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:19.616 Cannot find device "nvmf_init_br2" 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:19.616 Cannot find device "nvmf_tgt_br" 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:19.616 Cannot find device "nvmf_tgt_br2" 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:19.616 Cannot find device "nvmf_br" 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:19.616 
08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:19.616 Cannot find device "nvmf_init_if" 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:19.616 Cannot find device "nvmf_init_if2" 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:19.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:19.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:19.616 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:19.877 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:19.877 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:09:19.877 00:09:19.877 --- 10.0.0.3 ping statistics --- 00:09:19.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.877 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:19.877 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:19.877 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:09:19.877 00:09:19.877 --- 10.0.0.4 ping statistics --- 00:09:19.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.877 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:19.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:19.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:19.877 00:09:19.877 --- 10.0.0.1 ping statistics --- 00:09:19.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.877 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:19.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:19.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:09:19.877 00:09:19.877 --- 10.0.0.2 ping statistics --- 00:09:19.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.877 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:19.877 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66047 00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66047 00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # '[' -z 66047 ']' 00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@843 -- # local max_retries=100 00:09:20.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
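Note: the block above is nvmf_veth_init building the virtual test topology before nvmf_tgt is launched inside the namespace (pid 66047): stale interfaces are torn down first (hence the "Cannot find device" messages), then the nvmf_tgt_ns_spdk namespace is created with two initiator-side veth pairs (nvmf_init_if/nvmf_init_br, nvmf_init_if2/nvmf_init_br2) and two target-side pairs (nvmf_tgt_if/nvmf_tgt_br, nvmf_tgt_if2/nvmf_tgt_br2) joined by the nvmf_br bridge, 10.0.0.1-2 on the host side, 10.0.0.3-4 inside the namespace, SPDK_NVMF-tagged iptables ACCEPT rules for TCP port 4420, and a ping check in each direction. A condensed sketch of one initiator/target pair, assuming the same interface names as the log (the second pair and the FORWARD rule are set up the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge ties the host-side peers together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                           # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target namespace -> host

The SPDK_NVMF comment on each rule is what lets nvmftestfini's iptables-save | grep -v SPDK_NVMF | iptables-restore strip exactly these rules during cleanup, as seen at the end of the nmic test above.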
00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@847 -- # xtrace_disable 00:09:20.139 08:22:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.139 [2024-11-20 08:22:07.512587] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:09:20.139 [2024-11-20 08:22:07.512693] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.139 [2024-11-20 08:22:07.669020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.397 [2024-11-20 08:22:07.755123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.397 [2024-11-20 08:22:07.755199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.397 [2024-11-20 08:22:07.755214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.397 [2024-11-20 08:22:07.755225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.397 [2024-11-20 08:22:07.755235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.397 [2024-11-20 08:22:07.756746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.397 [2024-11-20 08:22:07.756883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.397 [2024-11-20 08:22:07.756952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.397 [2024-11-20 08:22:07.756954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.397 [2024-11-20 08:22:07.831363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:21.333 08:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:09:21.333 08:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@871 -- # return 0 00:09:21.333 08:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:21.333 08:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@735 -- # xtrace_disable 00:09:21.333 08:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.333 08:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.333 08:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:21.333 [2024-11-20 08:22:08.815792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.333 08:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.592 08:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:21.592 08:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.160 08:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:22.160 08:22:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.418 08:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:22.418 08:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.677 08:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:22.677 08:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:22.936 08:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.194 08:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:23.194 08:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.761 08:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:23.761 08:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.019 08:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:24.019 08:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:24.278 08:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:24.536 08:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:24.536 08:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.794 08:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:24.794 08:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:25.052 08:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:25.310 [2024-11-20 08:22:12.677331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:25.310 08:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:25.569 08:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:25.827 08:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -t 
tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:25.827 08:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:25.827 08:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # local i=0 00:09:25.827 08:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.827 08:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # [[ -n 4 ]] 00:09:25.827 08:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # nvme_device_counter=4 00:09:25.827 08:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # sleep 2 00:09:28.366 08:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1213 -- # (( i++ <= 15 )) 00:09:28.366 08:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1214 -- # lsblk -l -o NAME,SERIAL 00:09:28.366 08:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1214 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.366 08:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1214 -- # nvme_devices=4 00:09:28.366 08:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1215 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.366 08:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1215 -- # return 0 00:09:28.366 08:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:28.366 [global] 00:09:28.366 thread=1 00:09:28.366 invalidate=1 00:09:28.366 rw=write 00:09:28.366 time_based=1 00:09:28.366 runtime=1 00:09:28.366 ioengine=libaio 00:09:28.366 direct=1 00:09:28.366 bs=4096 00:09:28.366 iodepth=1 00:09:28.366 norandommap=0 00:09:28.366 numjobs=1 00:09:28.366 00:09:28.366 verify_dump=1 00:09:28.366 verify_backlog=512 00:09:28.366 verify_state_save=0 00:09:28.366 do_verify=1 00:09:28.366 verify=crc32c-intel 00:09:28.366 [job0] 00:09:28.366 filename=/dev/nvme0n1 00:09:28.366 [job1] 00:09:28.366 filename=/dev/nvme0n2 00:09:28.366 [job2] 00:09:28.366 filename=/dev/nvme0n3 00:09:28.366 [job3] 00:09:28.366 filename=/dev/nvme0n4 00:09:28.366 Could not set queue depth (nvme0n1) 00:09:28.366 Could not set queue depth (nvme0n2) 00:09:28.366 Could not set queue depth (nvme0n3) 00:09:28.366 Could not set queue depth (nvme0n4) 00:09:28.366 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.366 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.366 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.366 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.366 fio-3.35 00:09:28.366 Starting 4 threads 00:09:29.302 00:09:29.302 job0: (groupid=0, jobs=1): err= 0: pid=66232: Wed Nov 20 08:22:16 2024 00:09:29.302 read: IOPS=1205, BW=4823KiB/s (4939kB/s)(4828KiB/1001msec) 00:09:29.302 slat (nsec): min=10204, max=56447, avg=20383.35, stdev=6442.36 00:09:29.302 clat (usec): min=163, max=5889, avg=436.38, stdev=212.70 00:09:29.302 lat (usec): min=193, max=5921, avg=456.77, stdev=212.16 00:09:29.302 clat percentiles (usec): 00:09:29.302 | 1.00th=[ 
192], 5.00th=[ 237], 10.00th=[ 338], 20.00th=[ 379], 00:09:29.302 | 30.00th=[ 400], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 445], 00:09:29.302 | 70.00th=[ 457], 80.00th=[ 478], 90.00th=[ 515], 95.00th=[ 562], 00:09:29.302 | 99.00th=[ 734], 99.50th=[ 922], 99.90th=[ 3458], 99.95th=[ 5866], 00:09:29.302 | 99.99th=[ 5866] 00:09:29.302 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:29.303 slat (usec): min=15, max=198, avg=27.51, stdev= 9.86 00:09:29.303 clat (usec): min=115, max=4022, avg=260.40, stdev=144.02 00:09:29.303 lat (usec): min=157, max=4091, avg=287.91, stdev=143.05 00:09:29.303 clat percentiles (usec): 00:09:29.303 | 1.00th=[ 141], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 180], 00:09:29.303 | 30.00th=[ 200], 40.00th=[ 239], 50.00th=[ 260], 60.00th=[ 281], 00:09:29.303 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 338], 95.00th=[ 367], 00:09:29.303 | 99.00th=[ 424], 99.50th=[ 453], 99.90th=[ 3326], 99.95th=[ 4015], 00:09:29.303 | 99.99th=[ 4015] 00:09:29.303 bw ( KiB/s): min= 7704, max= 7704, per=31.38%, avg=7704.00, stdev= 0.00, samples=1 00:09:29.303 iops : min= 1926, max= 1926, avg=1926.00, stdev= 0.00, samples=1 00:09:29.303 lat (usec) : 250=27.89%, 500=66.68%, 750=4.92%, 1000=0.26% 00:09:29.303 lat (msec) : 2=0.07%, 4=0.11%, 10=0.07% 00:09:29.303 cpu : usr=1.70%, sys=5.50%, ctx=2743, majf=0, minf=9 00:09:29.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.303 issued rwts: total=1207,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.303 job1: (groupid=0, jobs=1): err= 0: pid=66233: Wed Nov 20 08:22:16 2024 00:09:29.303 read: IOPS=1156, BW=4627KiB/s (4738kB/s)(4632KiB/1001msec) 00:09:29.303 slat (nsec): min=10347, max=86264, avg=19531.35, stdev=7763.62 00:09:29.303 clat (usec): min=227, max=902, avg=438.15, stdev=72.06 00:09:29.303 lat (usec): min=245, max=925, avg=457.68, stdev=74.03 00:09:29.303 clat percentiles (usec): 00:09:29.303 | 1.00th=[ 281], 5.00th=[ 330], 10.00th=[ 367], 20.00th=[ 392], 00:09:29.303 | 30.00th=[ 408], 40.00th=[ 420], 50.00th=[ 433], 60.00th=[ 445], 00:09:29.303 | 70.00th=[ 457], 80.00th=[ 474], 90.00th=[ 519], 95.00th=[ 570], 00:09:29.303 | 99.00th=[ 685], 99.50th=[ 725], 99.90th=[ 889], 99.95th=[ 906], 00:09:29.303 | 99.99th=[ 906] 00:09:29.303 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:29.303 slat (usec): min=17, max=153, avg=30.36, stdev= 6.96 00:09:29.303 clat (usec): min=119, max=973, avg=271.44, stdev=63.46 00:09:29.303 lat (usec): min=144, max=1010, avg=301.80, stdev=64.42 00:09:29.303 clat percentiles (usec): 00:09:29.303 | 1.00th=[ 141], 5.00th=[ 161], 10.00th=[ 184], 20.00th=[ 225], 00:09:29.303 | 30.00th=[ 245], 40.00th=[ 258], 50.00th=[ 273], 60.00th=[ 289], 00:09:29.303 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[ 371], 00:09:29.303 | 99.00th=[ 408], 99.50th=[ 433], 99.90th=[ 668], 99.95th=[ 971], 00:09:29.303 | 99.99th=[ 971] 00:09:29.303 bw ( KiB/s): min= 6944, max= 6944, per=28.28%, avg=6944.00, stdev= 0.00, samples=1 00:09:29.303 iops : min= 1736, max= 1736, avg=1736.00, stdev= 0.00, samples=1 00:09:29.303 lat (usec) : 250=19.23%, 500=75.06%, 750=5.53%, 1000=0.19% 00:09:29.303 cpu : usr=2.30%, sys=5.30%, ctx=2694, majf=0, minf=12 00:09:29.303 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.303 issued rwts: total=1158,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.303 job2: (groupid=0, jobs=1): err= 0: pid=66235: Wed Nov 20 08:22:16 2024 00:09:29.303 read: IOPS=2017, BW=8072KiB/s (8266kB/s)(8080KiB/1001msec) 00:09:29.303 slat (nsec): min=12491, max=82217, avg=18139.50, stdev=6805.59 00:09:29.303 clat (usec): min=183, max=526, avg=255.94, stdev=27.71 00:09:29.303 lat (usec): min=196, max=551, avg=274.08, stdev=29.55 00:09:29.303 clat percentiles (usec): 00:09:29.303 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 233], 00:09:29.303 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:09:29.303 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 306], 00:09:29.303 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 404], 99.95th=[ 469], 00:09:29.303 | 99.99th=[ 529] 00:09:29.303 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:29.303 slat (usec): min=14, max=196, avg=25.28, stdev= 8.98 00:09:29.303 clat (usec): min=126, max=1021, avg=188.84, stdev=31.91 00:09:29.303 lat (usec): min=145, max=1060, avg=214.12, stdev=34.29 00:09:29.303 clat percentiles (usec): 00:09:29.303 | 1.00th=[ 139], 5.00th=[ 151], 10.00th=[ 159], 20.00th=[ 169], 00:09:29.303 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192], 00:09:29.303 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 233], 00:09:29.303 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 359], 99.95th=[ 562], 00:09:29.303 | 99.99th=[ 1020] 00:09:29.303 bw ( KiB/s): min= 8192, max= 8192, per=33.37%, avg=8192.00, stdev= 0.00, samples=1 00:09:29.303 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:29.303 lat (usec) : 250=72.07%, 500=27.85%, 750=0.05% 00:09:29.303 lat (msec) : 2=0.02% 00:09:29.303 cpu : usr=1.90%, sys=7.00%, ctx=4069, majf=0, minf=7 00:09:29.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.303 issued rwts: total=2020,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.303 job3: (groupid=0, jobs=1): err= 0: pid=66240: Wed Nov 20 08:22:16 2024 00:09:29.303 read: IOPS=963, BW=3852KiB/s (3945kB/s)(3856KiB/1001msec) 00:09:29.303 slat (usec): min=19, max=103, avg=41.77, stdev=12.78 00:09:29.303 clat (usec): min=297, max=1111, avg=540.38, stdev=128.42 00:09:29.303 lat (usec): min=342, max=1147, avg=582.16, stdev=132.02 00:09:29.303 clat percentiles (usec): 00:09:29.303 | 1.00th=[ 322], 5.00th=[ 359], 10.00th=[ 388], 20.00th=[ 424], 00:09:29.303 | 30.00th=[ 449], 40.00th=[ 478], 50.00th=[ 529], 60.00th=[ 570], 00:09:29.303 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 717], 95.00th=[ 775], 00:09:29.303 | 99.00th=[ 848], 99.50th=[ 857], 99.90th=[ 1106], 99.95th=[ 1106], 00:09:29.303 | 99.99th=[ 1106] 00:09:29.303 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:29.303 slat (usec): min=25, max=197, avg=45.35, stdev=12.53 00:09:29.303 clat (usec): min=134, max=959, avg=374.57, stdev=100.86 00:09:29.303 lat (usec): min=163, max=1002, avg=419.92, 
stdev=104.78 00:09:29.303 clat percentiles (usec): 00:09:29.303 | 1.00th=[ 153], 5.00th=[ 174], 10.00th=[ 245], 20.00th=[ 297], 00:09:29.303 | 30.00th=[ 330], 40.00th=[ 363], 50.00th=[ 388], 60.00th=[ 400], 00:09:29.303 | 70.00th=[ 420], 80.00th=[ 449], 90.00th=[ 498], 95.00th=[ 537], 00:09:29.303 | 99.00th=[ 594], 99.50th=[ 619], 99.90th=[ 693], 99.95th=[ 963], 00:09:29.303 | 99.99th=[ 963] 00:09:29.303 bw ( KiB/s): min= 4192, max= 4192, per=17.07%, avg=4192.00, stdev= 0.00, samples=1 00:09:29.303 iops : min= 1048, max= 1048, avg=1048.00, stdev= 0.00, samples=1 00:09:29.303 lat (usec) : 250=5.63%, 500=61.87%, 750=28.77%, 1000=3.67% 00:09:29.303 lat (msec) : 2=0.05% 00:09:29.303 cpu : usr=1.80%, sys=7.20%, ctx=1988, majf=0, minf=15 00:09:29.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.303 issued rwts: total=964,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.303 00:09:29.303 Run status group 0 (all jobs): 00:09:29.303 READ: bw=20.9MiB/s (21.9MB/s), 3852KiB/s-8072KiB/s (3945kB/s-8266kB/s), io=20.9MiB (21.9MB), run=1001-1001msec 00:09:29.303 WRITE: bw=24.0MiB/s (25.1MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:09:29.303 00:09:29.303 Disk stats (read/write): 00:09:29.303 nvme0n1: ios=1074/1322, merge=0/0, ticks=440/325, in_queue=765, util=86.36% 00:09:29.303 nvme0n2: ios=1052/1232, merge=0/0, ticks=457/339, in_queue=796, util=87.17% 00:09:29.303 nvme0n3: ios=1536/1936, merge=0/0, ticks=397/396, in_queue=793, util=89.10% 00:09:29.303 nvme0n4: ios=708/1024, merge=0/0, ticks=390/404, in_queue=794, util=89.66% 00:09:29.303 08:22:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:29.303 [global] 00:09:29.303 thread=1 00:09:29.303 invalidate=1 00:09:29.303 rw=randwrite 00:09:29.303 time_based=1 00:09:29.303 runtime=1 00:09:29.303 ioengine=libaio 00:09:29.303 direct=1 00:09:29.303 bs=4096 00:09:29.303 iodepth=1 00:09:29.303 norandommap=0 00:09:29.303 numjobs=1 00:09:29.303 00:09:29.303 verify_dump=1 00:09:29.303 verify_backlog=512 00:09:29.303 verify_state_save=0 00:09:29.303 do_verify=1 00:09:29.303 verify=crc32c-intel 00:09:29.303 [job0] 00:09:29.303 filename=/dev/nvme0n1 00:09:29.303 [job1] 00:09:29.303 filename=/dev/nvme0n2 00:09:29.304 [job2] 00:09:29.304 filename=/dev/nvme0n3 00:09:29.304 [job3] 00:09:29.304 filename=/dev/nvme0n4 00:09:29.304 Could not set queue depth (nvme0n1) 00:09:29.304 Could not set queue depth (nvme0n2) 00:09:29.304 Could not set queue depth (nvme0n3) 00:09:29.304 Could not set queue depth (nvme0n4) 00:09:29.561 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.561 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.561 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.561 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.561 fio-3.35 00:09:29.561 Starting 4 threads 00:09:30.939 00:09:30.939 job0: (groupid=0, jobs=1): err= 0: pid=66298: Wed Nov 20 08:22:18 2024 
00:09:30.939 read: IOPS=2073, BW=8296KiB/s (8495kB/s)(8304KiB/1001msec) 00:09:30.939 slat (nsec): min=11602, max=47170, avg=14481.37, stdev=2612.90 00:09:30.939 clat (usec): min=148, max=506, avg=219.91, stdev=27.88 00:09:30.939 lat (usec): min=163, max=528, avg=234.39, stdev=28.17 00:09:30.939 clat percentiles (usec): 00:09:30.939 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 200], 00:09:30.939 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:09:30.939 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 255], 95.00th=[ 265], 00:09:30.939 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 429], 99.95th=[ 502], 00:09:30.939 | 99.99th=[ 506] 00:09:30.939 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:30.939 slat (usec): min=14, max=162, avg=22.20, stdev= 6.11 00:09:30.939 clat (usec): min=108, max=2643, avg=175.38, stdev=56.12 00:09:30.939 lat (usec): min=129, max=2670, avg=197.59, stdev=57.01 00:09:30.939 clat percentiles (usec): 00:09:30.939 | 1.00th=[ 123], 5.00th=[ 135], 10.00th=[ 145], 20.00th=[ 153], 00:09:30.939 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:09:30.939 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 219], 00:09:30.939 | 99.00th=[ 258], 99.50th=[ 281], 99.90th=[ 392], 99.95th=[ 529], 00:09:30.939 | 99.99th=[ 2638] 00:09:30.939 bw ( KiB/s): min= 9928, max= 9928, per=24.84%, avg=9928.00, stdev= 0.00, samples=1 00:09:30.939 iops : min= 2482, max= 2482, avg=2482.00, stdev= 0.00, samples=1 00:09:30.939 lat (usec) : 250=93.81%, 500=6.10%, 750=0.06% 00:09:30.939 lat (msec) : 4=0.02% 00:09:30.939 cpu : usr=1.60%, sys=7.00%, ctx=4636, majf=0, minf=13 00:09:30.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.939 issued rwts: total=2076,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.939 job1: (groupid=0, jobs=1): err= 0: pid=66299: Wed Nov 20 08:22:18 2024 00:09:30.939 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:30.939 slat (nsec): min=11613, max=41305, avg=13290.42, stdev=2327.95 00:09:30.939 clat (usec): min=169, max=962, avg=224.06, stdev=26.40 00:09:30.939 lat (usec): min=181, max=975, avg=237.35, stdev=26.55 00:09:30.939 clat percentiles (usec): 00:09:30.939 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:09:30.939 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:09:30.939 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 260], 00:09:30.939 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 338], 99.95th=[ 553], 00:09:30.939 | 99.99th=[ 963] 00:09:30.939 write: IOPS=2538, BW=9.92MiB/s (10.4MB/s)(9.93MiB/1001msec); 0 zone resets 00:09:30.939 slat (usec): min=14, max=156, avg=19.14, stdev= 4.63 00:09:30.939 clat (usec): min=131, max=2455, avg=180.08, stdev=50.40 00:09:30.939 lat (usec): min=149, max=2490, avg=199.23, stdev=51.13 00:09:30.939 clat percentiles (usec): 00:09:30.939 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 163], 00:09:30.939 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:09:30.939 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 212], 00:09:30.939 | 99.00th=[ 241], 99.50th=[ 251], 99.90th=[ 289], 99.95th=[ 717], 00:09:30.939 | 99.99th=[ 2442] 00:09:30.939 bw ( KiB/s): min=10208, max=10208, 
per=25.54%, avg=10208.00, stdev= 0.00, samples=1 00:09:30.939 iops : min= 2552, max= 2552, avg=2552.00, stdev= 0.00, samples=1 00:09:30.939 lat (usec) : 250=95.60%, 500=4.31%, 750=0.04%, 1000=0.02% 00:09:30.939 lat (msec) : 4=0.02% 00:09:30.939 cpu : usr=2.50%, sys=5.20%, ctx=4589, majf=0, minf=7 00:09:30.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.939 issued rwts: total=2048,2541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.939 job2: (groupid=0, jobs=1): err= 0: pid=66300: Wed Nov 20 08:22:18 2024 00:09:30.939 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:30.939 slat (nsec): min=11472, max=33200, avg=13750.61, stdev=2173.73 00:09:30.939 clat (usec): min=158, max=346, avg=224.26, stdev=31.04 00:09:30.939 lat (usec): min=171, max=362, avg=238.01, stdev=31.15 00:09:30.939 clat percentiles (usec): 00:09:30.939 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 198], 00:09:30.939 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 233], 00:09:30.939 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 277], 00:09:30.939 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 338], 99.95th=[ 343], 00:09:30.939 | 99.99th=[ 347] 00:09:30.939 write: IOPS=2503, BW=9.78MiB/s (10.3MB/s)(9.79MiB/1001msec); 0 zone resets 00:09:30.939 slat (usec): min=13, max=190, avg=20.57, stdev= 5.65 00:09:30.939 clat (usec): min=109, max=613, avg=180.93, stdev=31.67 00:09:30.939 lat (usec): min=127, max=648, avg=201.50, stdev=32.77 00:09:30.939 clat percentiles (usec): 00:09:30.939 | 1.00th=[ 127], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 155], 00:09:30.939 | 30.00th=[ 163], 40.00th=[ 172], 50.00th=[ 180], 60.00th=[ 186], 00:09:30.939 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 219], 95.00th=[ 237], 00:09:30.939 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 482], 99.95th=[ 515], 00:09:30.939 | 99.99th=[ 611] 00:09:30.939 bw ( KiB/s): min= 9640, max= 9640, per=24.12%, avg=9640.00, stdev= 0.00, samples=1 00:09:30.939 iops : min= 2410, max= 2410, avg=2410.00, stdev= 0.00, samples=1 00:09:30.939 lat (usec) : 250=90.01%, 500=9.95%, 750=0.04% 00:09:30.939 cpu : usr=2.10%, sys=6.00%, ctx=4557, majf=0, minf=15 00:09:30.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.939 issued rwts: total=2048,2506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.939 job3: (groupid=0, jobs=1): err= 0: pid=66301: Wed Nov 20 08:22:18 2024 00:09:30.939 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:30.939 slat (nsec): min=11202, max=50690, avg=14001.30, stdev=2912.56 00:09:30.939 clat (usec): min=156, max=541, avg=228.24, stdev=39.79 00:09:30.939 lat (usec): min=170, max=564, avg=242.24, stdev=40.40 00:09:30.939 clat percentiles (usec): 00:09:30.939 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 198], 00:09:30.939 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 233], 00:09:30.939 | 70.00th=[ 243], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 293], 00:09:30.939 | 99.00th=[ 379], 99.50th=[ 412], 99.90th=[ 437], 99.95th=[ 469], 
00:09:30.939 | 99.99th=[ 545] 00:09:30.939 write: IOPS=2391, BW=9566KiB/s (9796kB/s)(9576KiB/1001msec); 0 zone resets 00:09:30.939 slat (nsec): min=13648, max=94512, avg=21187.41, stdev=5793.62 00:09:30.939 clat (usec): min=114, max=2067, avg=186.35, stdev=61.85 00:09:30.939 lat (usec): min=131, max=2088, avg=207.54, stdev=62.76 00:09:30.939 clat percentiles (usec): 00:09:30.939 | 1.00th=[ 123], 5.00th=[ 135], 10.00th=[ 143], 20.00th=[ 155], 00:09:30.939 | 30.00th=[ 163], 40.00th=[ 172], 50.00th=[ 184], 60.00th=[ 192], 00:09:30.939 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 231], 95.00th=[ 247], 00:09:30.939 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 783], 99.95th=[ 1549], 00:09:30.939 | 99.99th=[ 2073] 00:09:30.939 bw ( KiB/s): min= 8744, max= 8744, per=21.88%, avg=8744.00, stdev= 0.00, samples=1 00:09:30.939 iops : min= 2186, max= 2186, avg=2186.00, stdev= 0.00, samples=1 00:09:30.939 lat (usec) : 250=87.01%, 500=12.83%, 750=0.09%, 1000=0.02% 00:09:30.939 lat (msec) : 2=0.02%, 4=0.02% 00:09:30.939 cpu : usr=1.70%, sys=6.40%, ctx=4442, majf=0, minf=12 00:09:30.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.939 issued rwts: total=2048,2394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.939 00:09:30.939 Run status group 0 (all jobs): 00:09:30.939 READ: bw=32.1MiB/s (33.6MB/s), 8184KiB/s-8296KiB/s (8380kB/s-8495kB/s), io=32.1MiB (33.7MB), run=1001-1001msec 00:09:30.939 WRITE: bw=39.0MiB/s (40.9MB/s), 9566KiB/s-9.99MiB/s (9796kB/s-10.5MB/s), io=39.1MiB (41.0MB), run=1001-1001msec 00:09:30.939 00:09:30.939 Disk stats (read/write): 00:09:30.939 nvme0n1: ios=1977/2048, merge=0/0, ticks=450/371, in_queue=821, util=88.38% 00:09:30.939 nvme0n2: ios=1951/2048, merge=0/0, ticks=453/374, in_queue=827, util=88.68% 00:09:30.939 nvme0n3: ios=1893/2048, merge=0/0, ticks=440/382, in_queue=822, util=89.92% 00:09:30.939 nvme0n4: ios=1774/2048, merge=0/0, ticks=407/393, in_queue=800, util=89.86% 00:09:30.939 08:22:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:30.939 [global] 00:09:30.939 thread=1 00:09:30.939 invalidate=1 00:09:30.939 rw=write 00:09:30.939 time_based=1 00:09:30.939 runtime=1 00:09:30.939 ioengine=libaio 00:09:30.939 direct=1 00:09:30.939 bs=4096 00:09:30.939 iodepth=128 00:09:30.939 norandommap=0 00:09:30.939 numjobs=1 00:09:30.939 00:09:30.939 verify_dump=1 00:09:30.939 verify_backlog=512 00:09:30.939 verify_state_save=0 00:09:30.939 do_verify=1 00:09:30.939 verify=crc32c-intel 00:09:30.940 [job0] 00:09:30.940 filename=/dev/nvme0n1 00:09:30.940 [job1] 00:09:30.940 filename=/dev/nvme0n2 00:09:30.940 [job2] 00:09:30.940 filename=/dev/nvme0n3 00:09:30.940 [job3] 00:09:30.940 filename=/dev/nvme0n4 00:09:30.940 Could not set queue depth (nvme0n1) 00:09:30.940 Could not set queue depth (nvme0n2) 00:09:30.940 Could not set queue depth (nvme0n3) 00:09:30.940 Could not set queue depth (nvme0n4) 00:09:30.940 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.940 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.940 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.940 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.940 fio-3.35 00:09:30.940 Starting 4 threads 00:09:32.314 00:09:32.314 job0: (groupid=0, jobs=1): err= 0: pid=66357: Wed Nov 20 08:22:19 2024 00:09:32.314 read: IOPS=3724, BW=14.5MiB/s (15.3MB/s)(14.6MiB/1004msec) 00:09:32.314 slat (usec): min=6, max=7908, avg=129.10, stdev=676.99 00:09:32.314 clat (usec): min=3211, max=23923, avg=16034.57, stdev=2434.79 00:09:32.314 lat (usec): min=3225, max=23942, avg=16163.68, stdev=2476.39 00:09:32.314 clat percentiles (usec): 00:09:32.314 | 1.00th=[ 9765], 5.00th=[11600], 10.00th=[13566], 20.00th=[15139], 00:09:32.314 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16188], 60.00th=[16450], 00:09:32.314 | 70.00th=[16712], 80.00th=[17171], 90.00th=[18220], 95.00th=[20317], 00:09:32.314 | 99.00th=[22676], 99.50th=[22938], 99.90th=[23987], 99.95th=[23987], 00:09:32.314 | 99.99th=[23987] 00:09:32.314 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:09:32.314 slat (usec): min=12, max=6972, avg=117.77, stdev=502.36 00:09:32.314 clat (usec): min=7780, max=24053, avg=16306.18, stdev=1883.92 00:09:32.314 lat (usec): min=7801, max=24073, avg=16423.95, stdev=1930.50 00:09:32.314 clat percentiles (usec): 00:09:32.314 | 1.00th=[10683], 5.00th=[13304], 10.00th=[14877], 20.00th=[15401], 00:09:32.314 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16581], 00:09:32.314 | 70.00th=[16909], 80.00th=[16909], 90.00th=[17433], 95.00th=[19792], 00:09:32.314 | 99.00th=[22938], 99.50th=[23200], 99.90th=[23987], 99.95th=[23987], 00:09:32.314 | 99.99th=[23987] 00:09:32.314 bw ( KiB/s): min=16384, max=16384, per=28.07%, avg=16384.00, stdev= 0.00, samples=2 00:09:32.314 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:32.314 lat (msec) : 4=0.38%, 10=0.59%, 20=93.67%, 50=5.36% 00:09:32.314 cpu : usr=3.79%, sys=12.36%, ctx=483, majf=0, minf=15 00:09:32.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:32.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.314 issued rwts: total=3739,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.314 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.314 job1: (groupid=0, jobs=1): err= 0: pid=66358: Wed Nov 20 08:22:19 2024 00:09:32.314 read: IOPS=3889, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1004msec) 00:09:32.314 slat (usec): min=5, max=5922, avg=121.34, stdev=593.93 00:09:32.314 clat (usec): min=389, max=19243, avg=15875.05, stdev=1664.43 00:09:32.314 lat (usec): min=3895, max=19252, avg=15996.39, stdev=1555.88 00:09:32.314 clat percentiles (usec): 00:09:32.314 | 1.00th=[ 7963], 5.00th=[13435], 10.00th=[15139], 20.00th=[15401], 00:09:32.314 | 30.00th=[15533], 40.00th=[15795], 50.00th=[15926], 60.00th=[16188], 00:09:32.314 | 70.00th=[16450], 80.00th=[16712], 90.00th=[17171], 95.00th=[17695], 00:09:32.314 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:09:32.314 | 99.99th=[19268] 00:09:32.314 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:09:32.314 slat (usec): min=10, max=7284, avg=121.23, stdev=558.27 00:09:32.314 clat (usec): min=11110, max=21893, avg=15762.32, stdev=1355.54 00:09:32.314 lat (usec): min=13329, max=21928, avg=15883.56, stdev=1247.45 00:09:32.314 clat percentiles (usec): 00:09:32.314 | 
1.00th=[12256], 5.00th=[14484], 10.00th=[14615], 20.00th=[15008], 00:09:32.314 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:09:32.314 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16712], 95.00th=[17433], 00:09:32.314 | 99.00th=[21890], 99.50th=[21890], 99.90th=[21890], 99.95th=[21890], 00:09:32.314 | 99.99th=[21890] 00:09:32.314 bw ( KiB/s): min=16384, max=16384, per=28.07%, avg=16384.00, stdev= 0.00, samples=2 00:09:32.314 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:32.314 lat (usec) : 500=0.01% 00:09:32.315 lat (msec) : 4=0.06%, 10=0.74%, 20=97.64%, 50=1.55% 00:09:32.315 cpu : usr=3.39%, sys=11.76%, ctx=260, majf=0, minf=7 00:09:32.315 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:32.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.315 issued rwts: total=3905,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.315 job2: (groupid=0, jobs=1): err= 0: pid=66359: Wed Nov 20 08:22:19 2024 00:09:32.315 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:09:32.315 slat (usec): min=6, max=6737, avg=181.87, stdev=755.71 00:09:32.315 clat (usec): min=17310, max=30141, avg=23113.70, stdev=2010.31 00:09:32.315 lat (usec): min=17332, max=30159, avg=23295.57, stdev=2093.73 00:09:32.315 clat percentiles (usec): 00:09:32.315 | 1.00th=[17957], 5.00th=[19530], 10.00th=[20579], 20.00th=[22152], 00:09:32.315 | 30.00th=[22414], 40.00th=[22938], 50.00th=[22938], 60.00th=[23200], 00:09:32.315 | 70.00th=[23462], 80.00th=[23987], 90.00th=[26084], 95.00th=[26870], 00:09:32.315 | 99.00th=[28181], 99.50th=[29492], 99.90th=[29492], 99.95th=[29492], 00:09:32.315 | 99.99th=[30016] 00:09:32.315 write: IOPS=2888, BW=11.3MiB/s (11.8MB/s)(11.4MiB/1006msec); 0 zone resets 00:09:32.315 slat (usec): min=11, max=6979, avg=175.33, stdev=753.67 00:09:32.315 clat (usec): min=4434, max=30300, avg=23216.43, stdev=2711.59 00:09:32.315 lat (usec): min=5591, max=30390, avg=23391.76, stdev=2776.07 00:09:32.315 clat percentiles (usec): 00:09:32.315 | 1.00th=[10028], 5.00th=[19530], 10.00th=[21627], 20.00th=[22414], 00:09:32.315 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23725], 00:09:32.315 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25297], 95.00th=[27919], 00:09:32.315 | 99.00th=[29230], 99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:09:32.315 | 99.99th=[30278] 00:09:32.315 bw ( KiB/s): min= 9944, max=12288, per=19.04%, avg=11116.00, stdev=1657.46, samples=2 00:09:32.315 iops : min= 2486, max= 3072, avg=2779.00, stdev=414.36, samples=2 00:09:32.315 lat (msec) : 10=0.57%, 20=6.49%, 50=92.94% 00:09:32.315 cpu : usr=3.28%, sys=8.86%, ctx=326, majf=0, minf=9 00:09:32.315 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:32.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.315 issued rwts: total=2560,2906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.315 job3: (groupid=0, jobs=1): err= 0: pid=66360: Wed Nov 20 08:22:19 2024 00:09:32.315 read: IOPS=3312, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1004msec) 00:09:32.315 slat (usec): min=7, max=5138, avg=140.34, stdev=642.87 00:09:32.315 clat (usec): min=3128, max=21832, 
avg=18317.16, stdev=2141.96 00:09:32.315 lat (usec): min=3142, max=21845, avg=18457.50, stdev=2056.29 00:09:32.315 clat percentiles (usec): 00:09:32.315 | 1.00th=[ 7963], 5.00th=[15795], 10.00th=[17171], 20.00th=[17695], 00:09:32.315 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[19006], 00:09:32.315 | 70.00th=[19530], 80.00th=[19530], 90.00th=[20055], 95.00th=[20317], 00:09:32.315 | 99.00th=[21103], 99.50th=[21103], 99.90th=[21890], 99.95th=[21890], 00:09:32.315 | 99.99th=[21890] 00:09:32.315 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:09:32.315 slat (usec): min=10, max=4786, avg=140.40, stdev=649.00 00:09:32.315 clat (usec): min=13403, max=21188, avg=18305.88, stdev=999.52 00:09:32.315 lat (usec): min=14693, max=21208, avg=18446.27, stdev=785.40 00:09:32.315 clat percentiles (usec): 00:09:32.315 | 1.00th=[14484], 5.00th=[17171], 10.00th=[17433], 20.00th=[17695], 00:09:32.315 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18220], 60.00th=[18482], 00:09:32.315 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19530], 95.00th=[20055], 00:09:32.315 | 99.00th=[20317], 99.50th=[20579], 99.90th=[21103], 99.95th=[21103], 00:09:32.315 | 99.99th=[21103] 00:09:32.315 bw ( KiB/s): min=14315, max=14328, per=24.53%, avg=14321.50, stdev= 9.19, samples=2 00:09:32.315 iops : min= 3578, max= 3582, avg=3580.00, stdev= 2.83, samples=2 00:09:32.315 lat (msec) : 4=0.33%, 10=0.56%, 20=92.11%, 50=6.99% 00:09:32.315 cpu : usr=2.59%, sys=11.27%, ctx=275, majf=0, minf=15 00:09:32.315 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:32.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.315 issued rwts: total=3326,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.315 00:09:32.315 Run status group 0 (all jobs): 00:09:32.315 READ: bw=52.5MiB/s (55.1MB/s), 9.94MiB/s-15.2MiB/s (10.4MB/s-15.9MB/s), io=52.9MiB (55.4MB), run=1004-1006msec 00:09:32.315 WRITE: bw=57.0MiB/s (59.8MB/s), 11.3MiB/s-15.9MiB/s (11.8MB/s-16.7MB/s), io=57.4MiB (60.1MB), run=1004-1006msec 00:09:32.315 00:09:32.315 Disk stats (read/write): 00:09:32.315 nvme0n1: ios=3237/3584, merge=0/0, ticks=24999/26229, in_queue=51228, util=88.57% 00:09:32.315 nvme0n2: ios=3345/3584, merge=0/0, ticks=12041/12624, in_queue=24665, util=88.48% 00:09:32.315 nvme0n3: ios=2116/2560, merge=0/0, ticks=16129/18055, in_queue=34184, util=89.29% 00:09:32.315 nvme0n4: ios=2880/3072, merge=0/0, ticks=12645/12685, in_queue=25330, util=89.75% 00:09:32.315 08:22:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:32.315 [global] 00:09:32.315 thread=1 00:09:32.315 invalidate=1 00:09:32.315 rw=randwrite 00:09:32.315 time_based=1 00:09:32.315 runtime=1 00:09:32.315 ioengine=libaio 00:09:32.315 direct=1 00:09:32.315 bs=4096 00:09:32.315 iodepth=128 00:09:32.315 norandommap=0 00:09:32.315 numjobs=1 00:09:32.315 00:09:32.315 verify_dump=1 00:09:32.315 verify_backlog=512 00:09:32.315 verify_state_save=0 00:09:32.315 do_verify=1 00:09:32.315 verify=crc32c-intel 00:09:32.315 [job0] 00:09:32.315 filename=/dev/nvme0n1 00:09:32.315 [job1] 00:09:32.315 filename=/dev/nvme0n2 00:09:32.315 [job2] 00:09:32.315 filename=/dev/nvme0n3 00:09:32.315 [job3] 00:09:32.315 filename=/dev/nvme0n4 00:09:32.315 Could not set queue depth 
(nvme0n1) 00:09:32.315 Could not set queue depth (nvme0n2) 00:09:32.315 Could not set queue depth (nvme0n3) 00:09:32.315 Could not set queue depth (nvme0n4) 00:09:32.315 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.315 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.315 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.315 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.315 fio-3.35 00:09:32.315 Starting 4 threads 00:09:33.692 00:09:33.692 job0: (groupid=0, jobs=1): err= 0: pid=66414: Wed Nov 20 08:22:20 2024 00:09:33.692 read: IOPS=2780, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1004msec) 00:09:33.692 slat (usec): min=6, max=9505, avg=145.63, stdev=691.05 00:09:33.692 clat (usec): min=819, max=66437, avg=18678.42, stdev=10058.82 00:09:33.692 lat (usec): min=3097, max=66461, avg=18824.05, stdev=10134.34 00:09:33.692 clat percentiles (usec): 00:09:33.692 | 1.00th=[ 5538], 5.00th=[12125], 10.00th=[13173], 20.00th=[14222], 00:09:33.692 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:09:33.692 | 70.00th=[16450], 80.00th=[18220], 90.00th=[29230], 95.00th=[45351], 00:09:33.692 | 99.00th=[58983], 99.50th=[58983], 99.90th=[66323], 99.95th=[66323], 00:09:33.692 | 99.99th=[66323] 00:09:33.692 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:09:33.692 slat (usec): min=12, max=17510, avg=185.02, stdev=944.85 00:09:33.692 clat (usec): min=10690, max=91456, avg=23773.91, stdev=16744.61 00:09:33.692 lat (usec): min=10704, max=91479, avg=23958.93, stdev=16866.81 00:09:33.692 clat percentiles (usec): 00:09:33.692 | 1.00th=[12518], 5.00th=[14222], 10.00th=[14484], 20.00th=[14877], 00:09:33.692 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16057], 60.00th=[16581], 00:09:33.692 | 70.00th=[18744], 80.00th=[31589], 90.00th=[50070], 95.00th=[60556], 00:09:33.692 | 99.00th=[87557], 99.50th=[89654], 99.90th=[91751], 99.95th=[91751], 00:09:33.692 | 99.99th=[91751] 00:09:33.692 bw ( KiB/s): min= 8192, max=16384, per=25.69%, avg=12288.00, stdev=5792.62, samples=2 00:09:33.692 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:09:33.692 lat (usec) : 1000=0.02% 00:09:33.692 lat (msec) : 4=0.38%, 10=1.02%, 20=75.82%, 50=15.33%, 100=7.44% 00:09:33.692 cpu : usr=3.49%, sys=8.87%, ctx=409, majf=0, minf=13 00:09:33.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:33.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.692 issued rwts: total=2792,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.692 job1: (groupid=0, jobs=1): err= 0: pid=66415: Wed Nov 20 08:22:20 2024 00:09:33.692 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec) 00:09:33.692 slat (usec): min=7, max=28107, avg=229.15, stdev=1275.06 00:09:33.692 clat (usec): min=13932, max=56349, avg=30404.88, stdev=9243.47 00:09:33.692 lat (usec): min=13961, max=56370, avg=30634.04, stdev=9313.95 00:09:33.692 clat percentiles (usec): 00:09:33.692 | 1.00th=[15401], 5.00th=[19006], 10.00th=[19530], 20.00th=[20055], 00:09:33.692 | 30.00th=[22152], 40.00th=[27657], 50.00th=[28967], 60.00th=[33162], 
00:09:33.692 | 70.00th=[38536], 80.00th=[39584], 90.00th=[41157], 95.00th=[42206], 00:09:33.692 | 99.00th=[52691], 99.50th=[54789], 99.90th=[55313], 99.95th=[55837], 00:09:33.692 | 99.99th=[56361] 00:09:33.692 write: IOPS=2463, BW=9853KiB/s (10.1MB/s)(9952KiB/1010msec); 0 zone resets 00:09:33.692 slat (usec): min=6, max=16739, avg=203.62, stdev=1029.55 00:09:33.692 clat (usec): min=7871, max=56660, avg=26418.87, stdev=11460.74 00:09:33.692 lat (usec): min=9446, max=56678, avg=26622.48, stdev=11527.77 00:09:33.692 clat percentiles (usec): 00:09:33.692 | 1.00th=[10814], 5.00th=[12125], 10.00th=[14484], 20.00th=[15926], 00:09:33.692 | 30.00th=[17957], 40.00th=[20055], 50.00th=[21627], 60.00th=[25822], 00:09:33.692 | 70.00th=[32375], 80.00th=[41157], 90.00th=[42206], 95.00th=[44303], 00:09:33.692 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:09:33.692 | 99.99th=[56886] 00:09:33.692 bw ( KiB/s): min= 6600, max=12263, per=19.71%, avg=9431.50, stdev=4004.35, samples=2 00:09:33.692 iops : min= 1650, max= 3065, avg=2357.50, stdev=1000.56, samples=2 00:09:33.692 lat (msec) : 10=0.15%, 20=29.94%, 50=68.14%, 100=1.76% 00:09:33.692 cpu : usr=2.58%, sys=6.94%, ctx=372, majf=0, minf=10 00:09:33.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:09:33.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.692 issued rwts: total=2048,2488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.692 job2: (groupid=0, jobs=1): err= 0: pid=66420: Wed Nov 20 08:22:20 2024 00:09:33.692 read: IOPS=3586, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1003msec) 00:09:33.692 slat (usec): min=7, max=8616, avg=123.21, stdev=803.79 00:09:33.692 clat (usec): min=2732, max=28936, avg=17097.74, stdev=2231.77 00:09:33.692 lat (usec): min=2753, max=34618, avg=17220.95, stdev=2264.49 00:09:33.692 clat percentiles (usec): 00:09:33.692 | 1.00th=[10159], 5.00th=[15008], 10.00th=[15795], 20.00th=[16319], 00:09:33.692 | 30.00th=[16581], 40.00th=[16909], 50.00th=[16909], 60.00th=[17171], 00:09:33.692 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18744], 95.00th=[19530], 00:09:33.692 | 99.00th=[26346], 99.50th=[26870], 99.90th=[28967], 99.95th=[28967], 00:09:33.692 | 99.99th=[28967] 00:09:33.692 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:09:33.692 slat (usec): min=9, max=14412, avg=128.00, stdev=814.50 00:09:33.692 clat (usec): min=3115, max=24128, avg=16021.68, stdev=1999.76 00:09:33.692 lat (usec): min=3168, max=24152, avg=16149.68, stdev=1869.02 00:09:33.692 clat percentiles (usec): 00:09:33.692 | 1.00th=[ 9765], 5.00th=[13960], 10.00th=[14484], 20.00th=[14877], 00:09:33.692 | 30.00th=[15401], 40.00th=[15664], 50.00th=[16057], 60.00th=[16188], 00:09:33.692 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17957], 95.00th=[19006], 00:09:33.692 | 99.00th=[23987], 99.50th=[23987], 99.90th=[23987], 99.95th=[24249], 00:09:33.692 | 99.99th=[24249] 00:09:33.692 bw ( KiB/s): min=15472, max=16384, per=33.30%, avg=15928.00, stdev=644.88, samples=2 00:09:33.692 iops : min= 3868, max= 4096, avg=3982.00, stdev=161.22, samples=2 00:09:33.692 lat (msec) : 4=0.19%, 10=0.87%, 20=95.71%, 50=3.22% 00:09:33.692 cpu : usr=3.49%, sys=11.38%, ctx=166, majf=0, minf=11 00:09:33.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:33.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.692 issued rwts: total=3597,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.692 job3: (groupid=0, jobs=1): err= 0: pid=66422: Wed Nov 20 08:22:20 2024 00:09:33.692 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:09:33.692 slat (usec): min=6, max=10864, avg=228.44, stdev=990.81 00:09:33.692 clat (usec): min=14212, max=52402, avg=29320.16, stdev=9026.40 00:09:33.692 lat (usec): min=17075, max=53190, avg=29548.61, stdev=9106.11 00:09:33.692 clat percentiles (usec): 00:09:33.692 | 1.00th=[17171], 5.00th=[19792], 10.00th=[20841], 20.00th=[21627], 00:09:33.692 | 30.00th=[21890], 40.00th=[22414], 50.00th=[23725], 60.00th=[33817], 00:09:33.692 | 70.00th=[38536], 80.00th=[40109], 90.00th=[41157], 95.00th=[42206], 00:09:33.692 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:09:33.692 | 99.99th=[52167] 00:09:33.692 write: IOPS=2401, BW=9606KiB/s (9836kB/s)(9692KiB/1009msec); 0 zone resets 00:09:33.692 slat (usec): min=11, max=9664, avg=210.83, stdev=950.76 00:09:33.692 clat (usec): min=7377, max=53501, avg=27784.71, stdev=10359.14 00:09:33.692 lat (usec): min=9614, max=53520, avg=27995.55, stdev=10450.47 00:09:33.693 clat percentiles (usec): 00:09:33.693 | 1.00th=[13698], 5.00th=[18482], 10.00th=[19530], 20.00th=[20055], 00:09:33.693 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21365], 60.00th=[21890], 00:09:33.693 | 70.00th=[39060], 80.00th=[41157], 90.00th=[42206], 95.00th=[43254], 00:09:33.693 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[53740], 00:09:33.693 | 99.99th=[53740] 00:09:33.693 bw ( KiB/s): min= 6080, max=12288, per=19.20%, avg=9184.00, stdev=4389.72, samples=2 00:09:33.693 iops : min= 1520, max= 3072, avg=2296.00, stdev=1097.43, samples=2 00:09:33.693 lat (msec) : 10=0.07%, 20=13.29%, 50=85.33%, 100=1.32% 00:09:33.693 cpu : usr=3.08%, sys=6.94%, ctx=413, majf=0, minf=13 00:09:33.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:09:33.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.693 issued rwts: total=2048,2423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.693 00:09:33.693 Run status group 0 (all jobs): 00:09:33.693 READ: bw=40.6MiB/s (42.5MB/s), 8111KiB/s-14.0MiB/s (8306kB/s-14.7MB/s), io=41.0MiB (42.9MB), run=1003-1010msec 00:09:33.693 WRITE: bw=46.7MiB/s (49.0MB/s), 9606KiB/s-16.0MiB/s (9836kB/s-16.7MB/s), io=47.2MiB (49.5MB), run=1003-1010msec 00:09:33.693 00:09:33.693 Disk stats (read/write): 00:09:33.693 nvme0n1: ios=2166/2560, merge=0/0, ticks=13673/20922, in_queue=34595, util=88.37% 00:09:33.693 nvme0n2: ios=2052/2048, merge=0/0, ticks=39362/33185, in_queue=72547, util=89.27% 00:09:33.693 nvme0n3: ios=3072/3456, merge=0/0, ticks=50261/52139, in_queue=102400, util=89.18% 00:09:33.693 nvme0n4: ios=1958/2048, merge=0/0, ticks=22940/20474, in_queue=43414, util=89.73% 00:09:33.693 08:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:33.693 08:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66435 00:09:33.693 08:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 
-d 1 -t read -r 10 00:09:33.693 08:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:33.693 [global] 00:09:33.693 thread=1 00:09:33.693 invalidate=1 00:09:33.693 rw=read 00:09:33.693 time_based=1 00:09:33.693 runtime=10 00:09:33.693 ioengine=libaio 00:09:33.693 direct=1 00:09:33.693 bs=4096 00:09:33.693 iodepth=1 00:09:33.693 norandommap=1 00:09:33.693 numjobs=1 00:09:33.693 00:09:33.693 [job0] 00:09:33.693 filename=/dev/nvme0n1 00:09:33.693 [job1] 00:09:33.693 filename=/dev/nvme0n2 00:09:33.693 [job2] 00:09:33.693 filename=/dev/nvme0n3 00:09:33.693 [job3] 00:09:33.693 filename=/dev/nvme0n4 00:09:33.693 Could not set queue depth (nvme0n1) 00:09:33.693 Could not set queue depth (nvme0n2) 00:09:33.693 Could not set queue depth (nvme0n3) 00:09:33.693 Could not set queue depth (nvme0n4) 00:09:33.693 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.693 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.693 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.693 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.693 fio-3.35 00:09:33.693 Starting 4 threads 00:09:36.978 08:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:36.978 fio: pid=66478, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:36.978 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=37748736, buflen=4096 00:09:36.978 08:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:37.237 fio: pid=66477, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:37.237 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45682688, buflen=4096 00:09:37.237 08:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.237 08:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:37.495 fio: pid=66475, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:37.495 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1347584, buflen=4096 00:09:37.495 08:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.495 08:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:37.754 fio: pid=66476, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:37.754 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54837248, buflen=4096 00:09:37.754 00:09:37.754 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66475: Wed Nov 20 08:22:25 2024 00:09:37.754 read: IOPS=4622, BW=18.1MiB/s (18.9MB/s)(65.3MiB/3616msec) 00:09:37.754 slat (usec): min=10, max=12206, avg=15.42, stdev=158.89 00:09:37.754 clat (usec): min=134, max=2926, avg=199.86, stdev=40.71 00:09:37.754 lat (usec): 
min=147, max=12392, avg=215.28, stdev=163.98 00:09:37.754 clat percentiles (usec): 00:09:37.754 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 176], 00:09:37.754 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:09:37.754 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 255], 00:09:37.754 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 338], 99.95th=[ 457], 00:09:37.754 | 99.99th=[ 1680] 00:09:37.754 bw ( KiB/s): min=17952, max=18992, per=35.95%, avg=18523.57, stdev=453.14, samples=7 00:09:37.754 iops : min= 4488, max= 4748, avg=4630.86, stdev=113.34, samples=7 00:09:37.754 lat (usec) : 250=93.89%, 500=6.05%, 750=0.02%, 1000=0.01% 00:09:37.754 lat (msec) : 2=0.02%, 4=0.01% 00:09:37.754 cpu : usr=1.16%, sys=5.26%, ctx=16719, majf=0, minf=1 00:09:37.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.754 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.754 issued rwts: total=16714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.754 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66476: Wed Nov 20 08:22:25 2024 00:09:37.754 read: IOPS=3417, BW=13.3MiB/s (14.0MB/s)(52.3MiB/3918msec) 00:09:37.754 slat (usec): min=8, max=11357, avg=18.77, stdev=200.59 00:09:37.754 clat (usec): min=136, max=3767, avg=272.44, stdev=106.34 00:09:37.754 lat (usec): min=148, max=11565, avg=291.21, stdev=226.64 00:09:37.754 clat percentiles (usec): 00:09:37.754 | 1.00th=[ 149], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 190], 00:09:37.754 | 30.00th=[ 249], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 293], 00:09:37.754 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 363], 00:09:37.754 | 99.00th=[ 433], 99.50th=[ 553], 99.90th=[ 1090], 99.95th=[ 2737], 00:09:37.754 | 99.99th=[ 3720] 00:09:37.754 bw ( KiB/s): min=12168, max=17297, per=25.52%, avg=13147.57, stdev=1841.89, samples=7 00:09:37.754 iops : min= 3042, max= 4324, avg=3286.86, stdev=460.38, samples=7 00:09:37.754 lat (usec) : 250=30.57%, 500=68.77%, 750=0.49%, 1000=0.04% 00:09:37.754 lat (msec) : 2=0.07%, 4=0.06% 00:09:37.754 cpu : usr=1.17%, sys=4.54%, ctx=13396, majf=0, minf=1 00:09:37.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.754 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.754 issued rwts: total=13389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.754 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66477: Wed Nov 20 08:22:25 2024 00:09:37.754 read: IOPS=3383, BW=13.2MiB/s (13.9MB/s)(43.6MiB/3297msec) 00:09:37.754 slat (usec): min=11, max=9607, avg=17.80, stdev=114.27 00:09:37.754 clat (usec): min=192, max=2444, avg=276.37, stdev=50.76 00:09:37.754 lat (usec): min=206, max=9872, avg=294.17, stdev=125.13 00:09:37.754 clat percentiles (usec): 00:09:37.754 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 251], 00:09:37.754 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:09:37.754 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 330], 00:09:37.754 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 586], 99.95th=[ 832], 
00:09:37.754 | 99.99th=[ 2409] 00:09:37.754 bw ( KiB/s): min=12864, max=13824, per=26.12%, avg=13458.67, stdev=391.09, samples=6 00:09:37.754 iops : min= 3216, max= 3456, avg=3364.67, stdev=97.77, samples=6 00:09:37.754 lat (usec) : 250=19.45%, 500=80.44%, 750=0.05%, 1000=0.01% 00:09:37.754 lat (msec) : 2=0.02%, 4=0.03% 00:09:37.754 cpu : usr=1.06%, sys=4.52%, ctx=11157, majf=0, minf=2 00:09:37.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.754 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.755 issued rwts: total=11154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.755 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66478: Wed Nov 20 08:22:25 2024 00:09:37.755 read: IOPS=3126, BW=12.2MiB/s (12.8MB/s)(36.0MiB/2948msec) 00:09:37.755 slat (usec): min=8, max=159, avg=15.64, stdev= 7.06 00:09:37.755 clat (usec): min=176, max=2256, avg=302.64, stdev=49.24 00:09:37.755 lat (usec): min=191, max=2267, avg=318.28, stdev=50.21 00:09:37.755 clat percentiles (usec): 00:09:37.755 | 1.00th=[ 215], 5.00th=[ 251], 10.00th=[ 260], 20.00th=[ 273], 00:09:37.755 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:09:37.755 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 367], 00:09:37.755 | 99.00th=[ 416], 99.50th=[ 506], 99.90th=[ 709], 99.95th=[ 807], 00:09:37.755 | 99.99th=[ 2245] 00:09:37.755 bw ( KiB/s): min=12176, max=12608, per=24.21%, avg=12473.60, stdev=171.71, samples=5 00:09:37.755 iops : min= 3044, max= 3152, avg=3118.40, stdev=42.93, samples=5 00:09:37.755 lat (usec) : 250=4.78%, 500=94.68%, 750=0.46%, 1000=0.04% 00:09:37.755 lat (msec) : 2=0.01%, 4=0.01% 00:09:37.755 cpu : usr=0.58%, sys=5.26%, ctx=9219, majf=0, minf=1 00:09:37.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.755 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.755 issued rwts: total=9217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.755 00:09:37.755 Run status group 0 (all jobs): 00:09:37.755 READ: bw=50.3MiB/s (52.8MB/s), 12.2MiB/s-18.1MiB/s (12.8MB/s-18.9MB/s), io=197MiB (207MB), run=2948-3918msec 00:09:37.755 00:09:37.755 Disk stats (read/write): 00:09:37.755 nvme0n1: ios=16648/0, merge=0/0, ticks=3374/0, in_queue=3374, util=95.24% 00:09:37.755 nvme0n2: ios=13084/0, merge=0/0, ticks=3438/0, in_queue=3438, util=95.35% 00:09:37.755 nvme0n3: ios=10439/0, merge=0/0, ticks=2951/0, in_queue=2951, util=96.38% 00:09:37.755 nvme0n4: ios=8914/0, merge=0/0, ticks=2620/0, in_queue=2620, util=96.71% 00:09:37.755 08:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.755 08:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:38.013 08:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:38.013 08:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc3 00:09:38.579 08:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:38.579 08:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:38.836 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:38.837 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:39.095 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.095 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:39.353 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:39.353 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66435 00:09:39.354 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:39.354 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:39.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.354 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:39.354 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1226 -- # local i=0 00:09:39.354 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -o NAME,SERIAL 00:09:39.354 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.354 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1234 -- # lsblk -l -o NAME,SERIAL 00:09:39.354 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1234 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.354 nvmf hotplug test: fio failed as expected 00:09:39.354 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1238 -- # return 0 00:09:39.354 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:39.354 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:39.354 08:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.612 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:39.613 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:39.613 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:39.613 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:39.613 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:39.613 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:39.613 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:39.872 rmmod nvme_tcp 00:09:39.872 rmmod nvme_fabrics 00:09:39.872 rmmod nvme_keyring 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66047 ']' 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66047 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' -z 66047 ']' 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@961 -- # kill -0 66047 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # uname 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 66047 00:09:39.872 killing process with pid 66047 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@975 -- # echo 'killing process with pid 66047' 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # kill 66047 00:09:39.872 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@981 -- # wait 66047 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 
00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:40.131 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:40.392 ************************************ 00:09:40.392 END TEST nvmf_fio_target 00:09:40.392 ************************************ 00:09:40.392 00:09:40.392 real 0m21.021s 00:09:40.392 user 1m19.874s 00:09:40.392 sys 0m9.453s 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1133 -- # xtrace_disable 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.392 ************************************ 00:09:40.392 START TEST nvmf_bdevio 00:09:40.392 ************************************ 00:09:40.392 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:40.654 * Looking for test storage... 
00:09:40.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.654 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:09:40.654 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1638 -- # lcov --version 00:09:40.654 08:22:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:40.654 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:09:40.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.655 --rc genhtml_branch_coverage=1 00:09:40.655 --rc genhtml_function_coverage=1 00:09:40.655 --rc genhtml_legend=1 00:09:40.655 --rc geninfo_all_blocks=1 00:09:40.655 --rc geninfo_unexecuted_blocks=1 00:09:40.655 00:09:40.655 ' 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:09:40.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.655 --rc genhtml_branch_coverage=1 00:09:40.655 --rc genhtml_function_coverage=1 00:09:40.655 --rc genhtml_legend=1 00:09:40.655 --rc geninfo_all_blocks=1 00:09:40.655 --rc geninfo_unexecuted_blocks=1 00:09:40.655 00:09:40.655 ' 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:09:40.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.655 --rc genhtml_branch_coverage=1 00:09:40.655 --rc genhtml_function_coverage=1 00:09:40.655 --rc genhtml_legend=1 00:09:40.655 --rc geninfo_all_blocks=1 00:09:40.655 --rc geninfo_unexecuted_blocks=1 00:09:40.655 00:09:40.655 ' 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:09:40.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.655 --rc genhtml_branch_coverage=1 00:09:40.655 --rc genhtml_function_coverage=1 00:09:40.655 --rc genhtml_legend=1 00:09:40.655 --rc geninfo_all_blocks=1 00:09:40.655 --rc geninfo_unexecuted_blocks=1 00:09:40.655 00:09:40.655 ' 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.655 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.656 08:22:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.656 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 
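For orientation, the nvmf_veth_init steps traced in the following entries amount to: create a network namespace for the target, create veth pairs whose far ends get enslaved to a bridge, address the endpoints, and open the NVMe/TCP port. A simplified sketch of the equivalent commands is below (names follow the NVMF_* variables above; the real helper creates a second pair per side for 10.0.0.2/10.0.0.4, brings up lo inside the namespace, and adds the iptables ACCEPT rules seen later in the trace):

    # create the target namespace and one veth pair per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # address the endpoints: initiator stays in the default namespace, target lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bring links up and join the bridge-side peers into nvmf_br
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br master nvmf_br

The "Cannot find device" and "Cannot open network namespace" messages interleaved with the first few commands are expected: the helper tries to tear down leftovers from a previous run before rebuilding the topology.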
00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:40.656 Cannot find device "nvmf_init_br" 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:40.656 Cannot find device "nvmf_init_br2" 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:40.656 Cannot find device "nvmf_tgt_br" 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.656 Cannot find device "nvmf_tgt_br2" 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:40.656 Cannot find device "nvmf_init_br" 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:40.656 Cannot find device "nvmf_init_br2" 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:40.656 Cannot find device "nvmf_tgt_br" 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:40.656 Cannot find device "nvmf_tgt_br2" 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:40.656 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:40.915 Cannot find device "nvmf_br" 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:40.915 Cannot find device "nvmf_init_if" 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:40.915 Cannot find device "nvmf_init_if2" 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@172 -- # true 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.915 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.915 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip 
link set nvmf_br up 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:40.915 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:41.174 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.174 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:09:41.174 00:09:41.174 --- 10.0.0.3 ping statistics --- 00:09:41.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.174 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:41.174 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:41.174 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:09:41.174 00:09:41.174 --- 10.0.0.4 ping statistics --- 00:09:41.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.174 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:41.174 00:09:41.174 --- 10.0.0.1 ping statistics --- 00:09:41.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.174 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:41.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:09:41.174 00:09:41.174 --- 10.0.0.2 ping statistics --- 00:09:41.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.174 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66810 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66810 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # '[' -z 66810 ']' 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@843 -- # local max_retries=100 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@847 -- # xtrace_disable 00:09:41.174 08:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.174 [2024-11-20 08:22:28.610109] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:09:41.174 [2024-11-20 08:22:28.610203] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.432 [2024-11-20 08:22:28.766206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.432 [2024-11-20 08:22:28.859716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.432 [2024-11-20 08:22:28.859793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.432 [2024-11-20 08:22:28.859821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.432 [2024-11-20 08:22:28.859834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.432 [2024-11-20 08:22:28.859843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.432 [2024-11-20 08:22:28.861780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:41.432 [2024-11-20 08:22:28.861951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:41.432 [2024-11-20 08:22:28.862019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:41.432 [2024-11-20 08:22:28.862022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.432 [2024-11-20 08:22:28.939787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@871 -- # return 0 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@735 -- # xtrace_disable 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 [2024-11-20 08:22:29.736296] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 Malloc0 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.368 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.369 [2024-11-20 08:22:29.809082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.369 { 00:09:42.369 "params": { 00:09:42.369 "name": "Nvme$subsystem", 00:09:42.369 "trtype": "$TEST_TRANSPORT", 00:09:42.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.369 "adrfam": "ipv4", 00:09:42.369 "trsvcid": "$NVMF_PORT", 00:09:42.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.369 "hdgst": ${hdgst:-false}, 00:09:42.369 "ddgst": ${ddgst:-false} 00:09:42.369 }, 00:09:42.369 "method": "bdev_nvme_attach_controller" 00:09:42.369 } 00:09:42.369 EOF 00:09:42.369 )") 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
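The target-side configuration traced above reduces to five framework RPCs: create the TCP transport, create a 64 MiB/512 B malloc bdev, create a subsystem, attach the bdev as a namespace, and add a listener on the in-namespace address. Shown here in rpc.py form for readability (the script issues the same calls through rpc_cmd against the target started inside nvmf_tgt_ns_spdk; flags are copied verbatim from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The bdevio process itself is not configured over RPC; it reads the bdev_nvme_attach_controller parameters from the JSON that gen_nvmf_target_json renders from the heredoc template above, delivered through /dev/fd/62 (the rendered document is printed in the entries that follow).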
00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:42.369 08:22:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.369 "params": { 00:09:42.369 "name": "Nvme1", 00:09:42.369 "trtype": "tcp", 00:09:42.369 "traddr": "10.0.0.3", 00:09:42.369 "adrfam": "ipv4", 00:09:42.369 "trsvcid": "4420", 00:09:42.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.369 "hdgst": false, 00:09:42.369 "ddgst": false 00:09:42.369 }, 00:09:42.369 "method": "bdev_nvme_attach_controller" 00:09:42.369 }' 00:09:42.369 [2024-11-20 08:22:29.860449] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:09:42.369 [2024-11-20 08:22:29.860517] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66846 ] 00:09:42.628 [2024-11-20 08:22:30.009628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:42.628 [2024-11-20 08:22:30.078989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.628 [2024-11-20 08:22:30.079169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.628 [2024-11-20 08:22:30.079421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.628 [2024-11-20 08:22:30.145396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.887 I/O targets: 00:09:42.887 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:42.887 00:09:42.887 00:09:42.887 CUnit - A unit testing framework for C - Version 2.1-3 00:09:42.887 http://cunit.sourceforge.net/ 00:09:42.887 00:09:42.887 00:09:42.887 Suite: bdevio tests on: Nvme1n1 00:09:42.887 Test: blockdev write read block ...passed 00:09:42.887 Test: blockdev write zeroes read block ...passed 00:09:42.887 Test: blockdev write zeroes read no split ...passed 00:09:42.887 Test: blockdev write zeroes read split ...passed 00:09:42.887 Test: blockdev write zeroes read split partial ...passed 00:09:42.887 Test: blockdev reset ...[2024-11-20 08:22:30.299414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:42.887 [2024-11-20 08:22:30.299821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231e180 (9): Bad file descriptor 00:09:42.887 [2024-11-20 08:22:30.315703] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:42.887 passed 00:09:42.887 Test: blockdev write read 8 blocks ...passed 00:09:42.888 Test: blockdev write read size > 128k ...passed 00:09:42.888 Test: blockdev write read invalid size ...passed 00:09:42.888 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:42.888 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:42.888 Test: blockdev write read max offset ...passed 00:09:42.888 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:42.888 Test: blockdev writev readv 8 blocks ...passed 00:09:42.888 Test: blockdev writev readv 30 x 1block ...passed 00:09:42.888 Test: blockdev writev readv block ...passed 00:09:42.888 Test: blockdev writev readv size > 128k ...passed 00:09:42.888 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:42.888 Test: blockdev comparev and writev ...[2024-11-20 08:22:30.324555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.888 [2024-11-20 08:22:30.324601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:42.888 [2024-11-20 08:22:30.324627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.888 [2024-11-20 08:22:30.324641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:42.888 [2024-11-20 08:22:30.325065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.888 [2024-11-20 08:22:30.325095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:42.888 [2024-11-20 08:22:30.325118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.888 [2024-11-20 08:22:30.325131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:42.888 [2024-11-20 08:22:30.325538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.888 [2024-11-20 08:22:30.325567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:42.888 [2024-11-20 08:22:30.325598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.888 [2024-11-20 08:22:30.325611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:42.888 [2024-11-20 08:22:30.325997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.888 [2024-11-20 08:22:30.326024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:42.888 [2024-11-20 08:22:30.326046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.888 [2024-11-20 08:22:30.326058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:42.888 passed 00:09:42.888 Test: blockdev nvme passthru rw ...passed 00:09:42.888 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:22:30.327062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.888 [2024-11-20 08:22:30.327092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:42.888 [2024-11-20 08:22:30.327230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.888 [2024-11-20 08:22:30.327257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:42.888 [2024-11-20 08:22:30.327387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.888 [2024-11-20 08:22:30.327412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:42.888 [2024-11-20 08:22:30.327542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.888 [2024-11-20 08:22:30.327581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:42.888 passed 00:09:42.888 Test: blockdev nvme admin passthru ...passed 00:09:42.888 Test: blockdev copy ...passed 00:09:42.888 00:09:42.888 Run Summary: Type Total Ran Passed Failed Inactive 00:09:42.888 suites 1 1 n/a 0 0 00:09:42.888 tests 23 23 23 0 0 00:09:42.888 asserts 152 152 152 0 n/a 00:09:42.888 00:09:42.888 Elapsed time = 0.146 seconds 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.147 rmmod nvme_tcp 00:09:43.147 rmmod nvme_fabrics 00:09:43.147 rmmod nvme_keyring 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
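The nvmftestfini entries that follow undo the setup in reverse order. A condensed sketch of that cleanup path, under the assumption that remove_spdk_ns ultimately deletes the nvmf_tgt_ns_spdk namespace (the trace only shows the wrapper call, not that helper's internals), and omitting the second veth pair and the wait on the killed pid:

    # stop the nvmf target (pid recorded as nvmfpid at startup)
    kill "$nvmfpid"

    # drop only the SPDK-tagged firewall rules, leaving the rest of the ruleset untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # dismantle the bridge and veth topology, then the namespace
    ip link set nvmf_init_br nomaster
    ip link set nvmf_tgt_br nomaster
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of remove_spdk_ns here

Tagging each inserted iptables rule with an "SPDK_NVMF" comment during setup is what makes the grep -v filter a safe way to restore the pre-test firewall state.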
00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66810 ']' 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66810 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' -z 66810 ']' 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@961 -- # kill -0 66810 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # uname 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 66810 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@963 -- # process_name=reactor_3 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # '[' reactor_3 = sudo ']' 00:09:43.147 killing process with pid 66810 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@975 -- # echo 'killing process with pid 66810' 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # kill 66810 00:09:43.147 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@981 -- # wait 66810 00:09:43.715 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.715 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.715 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.715 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:43.715 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:43.715 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.715 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.715 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.715 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:43.715 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:43.715 08:22:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:43.715 00:09:43.715 real 0m3.376s 00:09:43.715 user 0m9.897s 00:09:43.715 sys 0m1.005s 00:09:43.715 ************************************ 00:09:43.715 END TEST nvmf_bdevio 00:09:43.715 ************************************ 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1133 -- # xtrace_disable 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:43.715 00:09:43.715 real 2m36.289s 00:09:43.715 user 6m52.738s 00:09:43.715 sys 0m52.229s 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1133 -- # xtrace_disable 00:09:43.715 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.715 ************************************ 00:09:43.715 END TEST nvmf_target_core 00:09:43.715 ************************************ 00:09:43.973 08:22:31 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:43.973 08:22:31 nvmf_tcp -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:09:43.973 08:22:31 nvmf_tcp -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:43.973 08:22:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:43.973 ************************************ 00:09:43.973 START TEST nvmf_target_extra 00:09:43.973 ************************************ 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:43.973 * Looking for test storage... 
00:09:43.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1638 -- # lcov --version 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:09:43.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.973 --rc genhtml_branch_coverage=1 00:09:43.973 --rc genhtml_function_coverage=1 00:09:43.973 --rc genhtml_legend=1 00:09:43.973 --rc geninfo_all_blocks=1 00:09:43.973 --rc geninfo_unexecuted_blocks=1 00:09:43.973 00:09:43.973 ' 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:09:43.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.973 --rc genhtml_branch_coverage=1 00:09:43.973 --rc genhtml_function_coverage=1 00:09:43.973 --rc genhtml_legend=1 00:09:43.973 --rc geninfo_all_blocks=1 00:09:43.973 --rc geninfo_unexecuted_blocks=1 00:09:43.973 00:09:43.973 ' 00:09:43.973 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:09:43.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.973 --rc genhtml_branch_coverage=1 00:09:43.973 --rc genhtml_function_coverage=1 00:09:43.973 --rc genhtml_legend=1 00:09:43.973 --rc geninfo_all_blocks=1 00:09:43.974 --rc geninfo_unexecuted_blocks=1 00:09:43.974 00:09:43.974 ' 00:09:43.974 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:09:43.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.974 --rc genhtml_branch_coverage=1 00:09:43.974 --rc genhtml_function_coverage=1 00:09:43.974 --rc genhtml_legend=1 00:09:43.974 --rc geninfo_all_blocks=1 00:09:43.974 --rc geninfo_unexecuted_blocks=1 00:09:43.974 00:09:43.974 ' 00:09:43.974 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:44.232 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.233 08:22:31 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.233 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1114 -- # xtrace_disable 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:44.233 ************************************ 00:09:44.233 START TEST nvmf_auth_target 00:09:44.233 ************************************ 00:09:44.233 08:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:44.233 * Looking for test storage... 00:09:44.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1638 -- # lcov --version 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.233 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:09:44.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.493 --rc genhtml_branch_coverage=1 00:09:44.493 --rc genhtml_function_coverage=1 00:09:44.493 --rc genhtml_legend=1 00:09:44.493 --rc geninfo_all_blocks=1 00:09:44.493 --rc geninfo_unexecuted_blocks=1 00:09:44.493 00:09:44.493 ' 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:09:44.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.493 --rc genhtml_branch_coverage=1 00:09:44.493 --rc genhtml_function_coverage=1 00:09:44.493 --rc genhtml_legend=1 00:09:44.493 --rc geninfo_all_blocks=1 00:09:44.493 --rc geninfo_unexecuted_blocks=1 00:09:44.493 00:09:44.493 ' 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:09:44.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.493 --rc genhtml_branch_coverage=1 00:09:44.493 --rc genhtml_function_coverage=1 00:09:44.493 --rc genhtml_legend=1 00:09:44.493 --rc geninfo_all_blocks=1 00:09:44.493 --rc geninfo_unexecuted_blocks=1 00:09:44.493 00:09:44.493 ' 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:09:44.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.493 --rc genhtml_branch_coverage=1 00:09:44.493 --rc genhtml_function_coverage=1 00:09:44.493 --rc genhtml_legend=1 00:09:44.493 --rc geninfo_all_blocks=1 00:09:44.493 --rc geninfo_unexecuted_blocks=1 00:09:44.493 00:09:44.493 ' 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.493 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.494 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local 
-g is_hw=no 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:44.494 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:44.495 Cannot find device "nvmf_init_br" 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:44.495 08:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:44.495 Cannot find device "nvmf_init_br2" 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:44.495 Cannot find device "nvmf_tgt_br" 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.495 Cannot find device "nvmf_tgt_br2" 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:44.495 Cannot find device "nvmf_init_br" 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:44.495 Cannot find device "nvmf_init_br2" 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:44.495 Cannot find device "nvmf_tgt_br" 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:44.495 Cannot find device "nvmf_tgt_br2" 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:44.495 Cannot find device "nvmf_br" 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:44.495 Cannot find device "nvmf_init_if" 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:44.495 Cannot find device "nvmf_init_if2" 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:44.495 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:44.495 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:44.754 08:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:44.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:44.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:09:44.754 00:09:44.754 --- 10.0.0.3 ping statistics --- 00:09:44.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.754 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:44.754 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:44.754 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:09:44.754 00:09:44.754 --- 10.0.0.4 ping statistics --- 00:09:44.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.754 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:44.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:44.754 00:09:44.754 --- 10.0.0.1 ping statistics --- 00:09:44.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.754 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:44.754 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:44.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:44.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:09:44.755 00:09:44.755 --- 10.0.0.2 ping statistics --- 00:09:44.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.755 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67146 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67146 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # '[' -z 67146 ']' 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@843 -- # local max_retries=100 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
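At this point nvmf_veth_init has finished wiring the test network: two veth pairs for the initiator side in the root namespace, two for the target side inside nvmf_tgt_ns_spdk, all joined over the nvmf_br bridge, with iptables rules opening TCP/4420 and the ping checks above confirming reachability in both directions (the earlier "Cannot find device" messages are just the cleanup pass failing harmlessly on a fresh host). A condensed, standalone sketch of the same wiring, assuming root privileges and the iproute2/iptables tools used in the trace:

  #!/usr/bin/env bash
  # Condensed sketch of the veth/bridge topology built by nvmf_veth_init above (root required).
  ip netns add nvmf_tgt_ns_spdk
  # One veth pair per endpoint: the *_if end carries the IP, the *_br end joins the bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # Target-side interfaces live inside the namespace where nvmf_tgt will later run.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the four *_br ends together so the two namespaces can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # Open the NVMe/TCP port and allow bridge forwarding (the trace tags these rules with an SPDK_NVMF comment).
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Sanity check: each side should answer from the opposite namespace.
  ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Keeping the target interfaces inside their own namespace is what allows the suite to launch the target as "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt", as seen in the nvmfappstart records that follow.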
00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@847 -- # xtrace_disable 00:09:44.755 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@871 -- # return 0 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@735 -- # xtrace_disable 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67170 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=52d550f676e79cd323ab08a12b635370b41b7540ddf7238f 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pSR 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 52d550f676e79cd323ab08a12b635370b41b7540ddf7238f 0 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 52d550f676e79cd323ab08a12b635370b41b7540ddf7238f 0 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=52d550f676e79cd323ab08a12b635370b41b7540ddf7238f 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:45.322 08:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pSR 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pSR 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.pSR 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:45.322 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=de867d6f03b7e0a7f6d66ae79c24be0c52085b254f63c22e7ca4ca6b67341e66 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.AM6 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key de867d6f03b7e0a7f6d66ae79c24be0c52085b254f63c22e7ca4ca6b67341e66 3 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 de867d6f03b7e0a7f6d66ae79c24be0c52085b254f63c22e7ca4ca6b67341e66 3 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=de867d6f03b7e0a7f6d66ae79c24be0c52085b254f63c22e7ca4ca6b67341e66 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.AM6 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.AM6 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.AM6 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:45.323 08:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=74d9a168e8370b8c8170a2aba8d55aa9 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tz9 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 74d9a168e8370b8c8170a2aba8d55aa9 1 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 74d9a168e8370b8c8170a2aba8d55aa9 1 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=74d9a168e8370b8c8170a2aba8d55aa9 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:45.323 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tz9 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tz9 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.tz9 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f7fd839448c91fe5c09ba960275cb4624dbfb8bb37290a6a 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.J0N 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f7fd839448c91fe5c09ba960275cb4624dbfb8bb37290a6a 2 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f7fd839448c91fe5c09ba960275cb4624dbfb8bb37290a6a 2 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f7fd839448c91fe5c09ba960275cb4624dbfb8bb37290a6a 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.J0N 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.J0N 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.J0N 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:45.582 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6e2ddbbfdb2ce6da3dc8ade73abf67dedf5854e569c3e72c 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.CCX 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6e2ddbbfdb2ce6da3dc8ade73abf67dedf5854e569c3e72c 2 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6e2ddbbfdb2ce6da3dc8ade73abf67dedf5854e569c3e72c 2 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6e2ddbbfdb2ce6da3dc8ade73abf67dedf5854e569c3e72c 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.CCX 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.CCX 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.CCX 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:45.582 08:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=448c08822abc1cac57f07dac8dfc5ead 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5Be 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 448c08822abc1cac57f07dac8dfc5ead 1 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 448c08822abc1cac57f07dac8dfc5ead 1 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=448c08822abc1cac57f07dac8dfc5ead 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5Be 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5Be 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.5Be 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1a04d64efadf14b77de23c1ba536a04a278c796609e83e3ed467b069ddab81e7 00:09:45.582 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DCk 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
1a04d64efadf14b77de23c1ba536a04a278c796609e83e3ed467b069ddab81e7 3 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1a04d64efadf14b77de23c1ba536a04a278c796609e83e3ed467b069ddab81e7 3 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1a04d64efadf14b77de23c1ba536a04a278c796609e83e3ed467b069ddab81e7 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DCk 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DCk 00:09:45.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.DCk 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67146 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # '[' -z 67146 ']' 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@843 -- # local max_retries=100 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@847 -- # xtrace_disable 00:09:45.842 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:46.101 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:09:46.101 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@871 -- # return 0 00:09:46.101 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67170 /var/tmp/host.sock 00:09:46.101 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # '[' -z 67170 ']' 00:09:46.101 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/host.sock 00:09:46.101 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@843 -- # local max_retries=100 00:09:46.101 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
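The gen_dhchap_key calls traced above populate the keys[]/ckeys[] arrays one temp file at a time: random hex read from /dev/urandom and sized to the requested length, wrapped into a DHHC-1:<hash-id>:<base64 blob>: secret by an inline python helper, written to a mktemp path and locked down with chmod 0600. A rough, self-contained sketch of that flow follows; the format_dhchap_key stand-in, the CRC-32 trailer it appends, and the redirect into the temp file are illustrative assumptions, not a copy of the helper in nvmf/common.sh.

  #!/usr/bin/env bash
  # Hash-id mapping as shown in the trace: null=0, sha256=1, sha384=2, sha512=3.
  declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

  format_dhchap_key() {  # hypothetical stand-in for the "python -" helper seen above;
                         # the base64-of-hex-string-plus-CRC-32 packing is an approximation
                         # of the DHHC-1 secret form and may not match nvmf/common.sh exactly.
      python3 -c '
  import base64, sys, zlib
  key = sys.argv[1].encode()
  blob = base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little")).decode()
  print(f"DHHC-1:{int(sys.argv[2]):02}:{blob}:")
  ' "$1" "$2"
  }

  gen_dhchap_key() {
      local digest=$1 len=$2 key file
      # "len" counts hex characters, so read len/2 random bytes (48 -> 24, 64 -> 32), as in the trace.
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
      file=$(mktemp -t "spdk.key-$digest.XXX")
      format_dhchap_key "$key" "${digests[$digest]}" > "$file"   # redirect into the temp file is an assumption
      chmod 0600 "$file"
      echo "$file"
  }

  # Usage mirroring the trace, e.g. keys[0] became /tmp/spdk.key-null.pSR above:
  keys[0]=$(gen_dhchap_key null 48)
  ckeys[0]=$(gen_dhchap_key sha512 64)

These files are then registered on both sides via keyring_file_add_key (rpc.py against the target socket and /var/tmp/host.sock), which is what the key0/ckey0 records that follow are doing.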
00:09:46.101 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@847 -- # xtrace_disable 00:09:46.101 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.359 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:09:46.359 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@871 -- # return 0 00:09:46.359 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:46.359 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:46.359 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.359 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:46.359 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:46.359 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pSR 00:09:46.359 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:46.359 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.360 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:46.360 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.pSR 00:09:46.360 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.pSR 00:09:46.617 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.AM6 ]] 00:09:46.617 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AM6 00:09:46.617 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:46.617 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.617 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:46.617 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AM6 00:09:46.617 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AM6 00:09:46.876 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:46.876 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tz9 00:09:46.876 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:46.876 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.876 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:46.876 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.tz9 00:09:46.876 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.tz9 00:09:47.134 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.J0N ]] 00:09:47.135 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.J0N 00:09:47.135 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:47.135 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.135 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:47.135 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.J0N 00:09:47.135 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.J0N 00:09:47.701 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:47.702 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.CCX 00:09:47.702 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:47.702 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.702 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:47.702 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.CCX 00:09:47.702 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.CCX 00:09:47.960 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.5Be ]] 00:09:47.960 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5Be 00:09:47.960 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:47.960 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.960 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:47.960 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5Be 00:09:47.960 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5Be 00:09:48.219 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:48.219 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.DCk 00:09:48.219 08:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:48.219 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.219 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:48.219 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.DCk 00:09:48.219 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.DCk 00:09:48.478 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:48.478 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:48.478 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:48.478 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:48.478 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:48.478 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:48.738 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:48.997 00:09:48.997 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:48.997 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:48.997 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:49.564 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:49.564 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:49.564 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:49.564 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.564 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:49.564 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:49.564 { 00:09:49.564 "cntlid": 1, 00:09:49.564 "qid": 0, 00:09:49.564 "state": "enabled", 00:09:49.564 "thread": "nvmf_tgt_poll_group_000", 00:09:49.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:09:49.564 "listen_address": { 00:09:49.564 "trtype": "TCP", 00:09:49.564 "adrfam": "IPv4", 00:09:49.564 "traddr": "10.0.0.3", 00:09:49.564 "trsvcid": "4420" 00:09:49.564 }, 00:09:49.564 "peer_address": { 00:09:49.564 "trtype": "TCP", 00:09:49.564 "adrfam": "IPv4", 00:09:49.564 "traddr": "10.0.0.1", 00:09:49.564 "trsvcid": "54928" 00:09:49.564 }, 00:09:49.564 "auth": { 00:09:49.564 "state": "completed", 00:09:49.564 "digest": "sha256", 00:09:49.564 "dhgroup": "null" 00:09:49.564 } 00:09:49.564 } 00:09:49.564 ]' 00:09:49.564 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:49.564 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:49.564 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:49.565 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:49.565 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:49.565 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:49.565 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:49.565 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:49.823 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:09:49.823 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:09:55.087 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.087 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:55.087 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:55.087 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.087 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:55.087 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:55.087 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:55.087 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.087 08:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.087 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:55.087 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:55.345 { 00:09:55.345 "cntlid": 3, 00:09:55.345 "qid": 0, 00:09:55.345 "state": "enabled", 00:09:55.345 "thread": "nvmf_tgt_poll_group_000", 00:09:55.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:09:55.345 "listen_address": { 00:09:55.345 "trtype": "TCP", 00:09:55.345 "adrfam": "IPv4", 00:09:55.345 "traddr": "10.0.0.3", 00:09:55.345 "trsvcid": "4420" 00:09:55.345 }, 00:09:55.345 "peer_address": { 00:09:55.345 "trtype": "TCP", 00:09:55.345 "adrfam": "IPv4", 00:09:55.345 "traddr": "10.0.0.1", 00:09:55.345 "trsvcid": "56554" 00:09:55.345 }, 00:09:55.345 "auth": { 00:09:55.345 "state": "completed", 00:09:55.345 "digest": "sha256", 00:09:55.345 "dhgroup": "null" 00:09:55.345 } 00:09:55.345 } 00:09:55.345 ]' 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:55.345 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:55.910 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret 
DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:09:55.910 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:09:56.477 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:56.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:56.477 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:56.477 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:56.477 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.477 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:56.477 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:56.477 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:56.477 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.043 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.302 00:09:57.302 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:57.302 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.302 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:57.562 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.562 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.562 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:57.562 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.562 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:57.562 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:57.562 { 00:09:57.562 "cntlid": 5, 00:09:57.562 "qid": 0, 00:09:57.562 "state": "enabled", 00:09:57.562 "thread": "nvmf_tgt_poll_group_000", 00:09:57.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:09:57.562 "listen_address": { 00:09:57.562 "trtype": "TCP", 00:09:57.562 "adrfam": "IPv4", 00:09:57.562 "traddr": "10.0.0.3", 00:09:57.562 "trsvcid": "4420" 00:09:57.562 }, 00:09:57.562 "peer_address": { 00:09:57.562 "trtype": "TCP", 00:09:57.562 "adrfam": "IPv4", 00:09:57.562 "traddr": "10.0.0.1", 00:09:57.562 "trsvcid": "56588" 00:09:57.562 }, 00:09:57.562 "auth": { 00:09:57.562 "state": "completed", 00:09:57.562 "digest": "sha256", 00:09:57.562 "dhgroup": "null" 00:09:57.562 } 00:09:57.562 } 00:09:57.562 ]' 00:09:57.562 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:57.562 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:57.562 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:57.821 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:57.821 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:57.821 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:57.821 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:57.821 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.080 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:09:58.080 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:09:58.648 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.648 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:09:58.648 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:58.648 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.648 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:58.648 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:58.648 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:58.648 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:58.907 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:59.475 00:09:59.475 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.475 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.475 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.475 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.475 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.475 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:09:59.475 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.475 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:09:59.475 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:59.475 { 00:09:59.475 "cntlid": 7, 00:09:59.475 "qid": 0, 00:09:59.475 "state": "enabled", 00:09:59.475 "thread": "nvmf_tgt_poll_group_000", 00:09:59.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:09:59.475 "listen_address": { 00:09:59.475 "trtype": "TCP", 00:09:59.475 "adrfam": "IPv4", 00:09:59.475 "traddr": "10.0.0.3", 00:09:59.475 "trsvcid": "4420" 00:09:59.475 }, 00:09:59.475 "peer_address": { 00:09:59.475 "trtype": "TCP", 00:09:59.475 "adrfam": "IPv4", 00:09:59.475 "traddr": "10.0.0.1", 00:09:59.475 "trsvcid": "56624" 00:09:59.475 }, 00:09:59.475 "auth": { 00:09:59.475 "state": "completed", 00:09:59.475 "digest": "sha256", 00:09:59.475 "dhgroup": "null" 00:09:59.475 } 00:09:59.475 } 00:09:59.475 ]' 00:09:59.735 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:59.735 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:59.735 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:59.735 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:59.735 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:59.735 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:59.735 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:59.735 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:59.994 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:09:59.994 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:00.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.929 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.188 00:10:01.447 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.447 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.447 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.706 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.706 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.706 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:01.706 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.706 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:01.706 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:01.706 { 00:10:01.706 "cntlid": 9, 00:10:01.706 "qid": 0, 00:10:01.706 "state": "enabled", 00:10:01.706 "thread": "nvmf_tgt_poll_group_000", 00:10:01.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:01.706 "listen_address": { 00:10:01.706 "trtype": "TCP", 00:10:01.706 "adrfam": "IPv4", 00:10:01.706 "traddr": "10.0.0.3", 00:10:01.706 "trsvcid": "4420" 00:10:01.706 }, 00:10:01.706 "peer_address": { 00:10:01.706 "trtype": "TCP", 00:10:01.706 "adrfam": "IPv4", 00:10:01.706 "traddr": "10.0.0.1", 00:10:01.706 "trsvcid": "56654" 00:10:01.706 }, 00:10:01.706 "auth": { 00:10:01.706 "state": "completed", 00:10:01.706 "digest": "sha256", 00:10:01.706 "dhgroup": "ffdhe2048" 00:10:01.706 } 00:10:01.706 } 00:10:01.706 ]' 00:10:01.706 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:01.706 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:01.706 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:01.707 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:01.707 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:01.707 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.707 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:01.707 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.275 
08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:02.275 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:02.843 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:02.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:02.843 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:02.843 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:02.843 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.843 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:02.843 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:02.843 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:02.843 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:03.102 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:03.102 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:03.102 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:03.102 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:03.102 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:03.102 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.102 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.102 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:03.102 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.102 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:03.103 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.103 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.103 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.362 00:10:03.362 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:03.362 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:03.362 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.621 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:03.621 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:03.621 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:03.621 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.621 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:03.880 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:03.880 { 00:10:03.880 "cntlid": 11, 00:10:03.880 "qid": 0, 00:10:03.880 "state": "enabled", 00:10:03.880 "thread": "nvmf_tgt_poll_group_000", 00:10:03.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:03.880 "listen_address": { 00:10:03.880 "trtype": "TCP", 00:10:03.880 "adrfam": "IPv4", 00:10:03.880 "traddr": "10.0.0.3", 00:10:03.880 "trsvcid": "4420" 00:10:03.880 }, 00:10:03.880 "peer_address": { 00:10:03.880 "trtype": "TCP", 00:10:03.880 "adrfam": "IPv4", 00:10:03.880 "traddr": "10.0.0.1", 00:10:03.880 "trsvcid": "50670" 00:10:03.880 }, 00:10:03.880 "auth": { 00:10:03.880 "state": "completed", 00:10:03.880 "digest": "sha256", 00:10:03.880 "dhgroup": "ffdhe2048" 00:10:03.880 } 00:10:03.880 } 00:10:03.880 ]' 00:10:03.880 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:03.880 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:03.880 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:03.880 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:03.880 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:03.880 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:03.880 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:03.880 
08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.139 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:04.139 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:05.074 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.074 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:05.074 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:05.074 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.074 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:05.074 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:05.074 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:05.074 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 
0 == 0 ]] 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.333 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.593 00:10:05.593 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:05.593 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:05.593 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.852 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.852 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.852 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:05.852 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.852 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:05.852 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:05.852 { 00:10:05.852 "cntlid": 13, 00:10:05.852 "qid": 0, 00:10:05.852 "state": "enabled", 00:10:05.852 "thread": "nvmf_tgt_poll_group_000", 00:10:05.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:05.852 "listen_address": { 00:10:05.852 "trtype": "TCP", 00:10:05.852 "adrfam": "IPv4", 00:10:05.852 "traddr": "10.0.0.3", 00:10:05.852 "trsvcid": "4420" 00:10:05.852 }, 00:10:05.852 "peer_address": { 00:10:05.852 "trtype": "TCP", 00:10:05.852 "adrfam": "IPv4", 00:10:05.852 "traddr": "10.0.0.1", 00:10:05.852 "trsvcid": "50698" 00:10:05.852 }, 00:10:05.852 "auth": { 00:10:05.852 "state": "completed", 00:10:05.852 "digest": "sha256", 00:10:05.852 "dhgroup": "ffdhe2048" 00:10:05.852 } 00:10:05.852 } 00:10:05.852 ]' 00:10:05.852 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:06.111 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.111 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:06.111 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:06.111 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:06.111 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.111 08:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.111 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.370 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:06.371 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:06.939 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.939 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:06.939 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:06.939 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.198 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:07.198 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:07.198 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:07.198 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
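Editor's note: the iterations above and below all follow the same connect_authenticate pattern from target/auth.sh — pick a digest/dhgroup pair, allow the host NQN on the subsystem with one of the pre-generated keys, attach a controller through the host-side RPC server, confirm the qpair reports a completed authentication, then tear down and move on to the next key. A minimal sketch of one such iteration, distilled only from commands visible in this trace, is reproduced here for readability. In the sketch, rpc_cmd stands for the autotest wrapper used for target-side RPCs exactly as it appears in the log, host-side calls go through /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock, and the shell variables are illustrative shorthand introduced here, not names defined by the test itself.

  # Illustrative shorthand (not variables used by target/auth.sh)
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"

  # Keys are registered once up front on both target and host keyrings:
  rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pSR      # target side
  $HOSTRPC keyring_file_add_key key0 /tmp/spdk.key-null.pSR     # host side

  # Per iteration: restrict the host to one digest/dhgroup combination ...
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # ... allow the host NQN on the subsystem with this key pair ...
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # ... authenticate over TCP and verify the qpair's auth state ...
  $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect: completed
  # ... then detach before the next (digest, dhgroup, key) combination.
  $HOSTRPC bdev_nvme_detach_controller nvme0

The nvme connect invocations seen throughout this trace exercise the same handshake through nvme-cli instead of the SPDK host stack (nvme connect ... --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...), followed by nvme disconnect and nvmf_subsystem_remove_host once the disconnected controller count is confirmed.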
00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:07.457 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:07.715 00:10:07.715 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:07.715 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.715 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:07.974 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:07.974 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:07.974 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:07.974 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.974 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:07.974 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:07.974 { 00:10:07.974 "cntlid": 15, 00:10:07.974 "qid": 0, 00:10:07.974 "state": "enabled", 00:10:07.974 "thread": "nvmf_tgt_poll_group_000", 00:10:07.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:07.974 "listen_address": { 00:10:07.974 "trtype": "TCP", 00:10:07.974 "adrfam": "IPv4", 00:10:07.974 "traddr": "10.0.0.3", 00:10:07.974 "trsvcid": "4420" 00:10:07.974 }, 00:10:07.974 "peer_address": { 00:10:07.974 "trtype": "TCP", 00:10:07.974 "adrfam": "IPv4", 00:10:07.974 "traddr": "10.0.0.1", 00:10:07.974 "trsvcid": "50718" 00:10:07.974 }, 00:10:07.974 "auth": { 00:10:07.974 "state": "completed", 00:10:07.974 "digest": "sha256", 00:10:07.974 "dhgroup": "ffdhe2048" 00:10:07.974 } 00:10:07.974 } 00:10:07.974 ]' 00:10:07.974 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:07.974 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:07.974 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:07.974 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:07.974 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:08.233 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.233 
08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.233 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.491 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:08.491 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:09.058 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.058 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:09.058 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:09.058 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.058 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:09.058 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:09.058 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.058 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:09.058 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.317 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.885 00:10:09.885 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:09.885 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:09.885 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:10.144 { 00:10:10.144 "cntlid": 17, 00:10:10.144 "qid": 0, 00:10:10.144 "state": "enabled", 00:10:10.144 "thread": "nvmf_tgt_poll_group_000", 00:10:10.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:10.144 "listen_address": { 00:10:10.144 "trtype": "TCP", 00:10:10.144 "adrfam": "IPv4", 00:10:10.144 "traddr": "10.0.0.3", 00:10:10.144 "trsvcid": "4420" 00:10:10.144 }, 00:10:10.144 "peer_address": { 00:10:10.144 "trtype": "TCP", 00:10:10.144 "adrfam": "IPv4", 00:10:10.144 "traddr": "10.0.0.1", 00:10:10.144 "trsvcid": "50756" 00:10:10.144 }, 00:10:10.144 "auth": { 00:10:10.144 "state": "completed", 00:10:10.144 "digest": "sha256", 00:10:10.144 "dhgroup": "ffdhe3072" 00:10:10.144 } 00:10:10.144 } 00:10:10.144 ]' 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.144 08:22:57 
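[editor note] Each iteration attaches the host-side bdev controller with one of the key names used throughout this run (key0..key3 and the matching controller keys ckey0..ckey3, set up earlier in the test and not shown in this excerpt). A sketch of that attach step against the host RPC socket seen in the log:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4
  # Attach bdev "nvme0" to cnode0 over TCP, authenticating with key0/ckey0
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0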
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.144 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.711 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:10.711 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:11.277 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.277 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:11.277 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:11.277 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.277 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:11.277 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.277 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:11.277 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.535 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.793 00:10:12.051 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:12.051 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:12.051 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.310 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.310 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.310 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:12.310 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.310 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:12.310 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:12.310 { 00:10:12.310 "cntlid": 19, 00:10:12.310 "qid": 0, 00:10:12.310 "state": "enabled", 00:10:12.310 "thread": "nvmf_tgt_poll_group_000", 00:10:12.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:12.310 "listen_address": { 00:10:12.310 "trtype": "TCP", 00:10:12.310 "adrfam": "IPv4", 00:10:12.310 "traddr": "10.0.0.3", 00:10:12.310 "trsvcid": "4420" 00:10:12.310 }, 00:10:12.310 "peer_address": { 00:10:12.310 "trtype": "TCP", 00:10:12.310 "adrfam": "IPv4", 00:10:12.310 "traddr": "10.0.0.1", 00:10:12.310 "trsvcid": "50782" 00:10:12.310 }, 00:10:12.310 "auth": { 00:10:12.310 "state": "completed", 00:10:12.311 "digest": "sha256", 00:10:12.311 "dhgroup": "ffdhe3072" 00:10:12.311 } 00:10:12.311 } 00:10:12.311 ]' 00:10:12.311 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.311 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.311 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.311 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:12.311 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.311 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.311 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.311 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.882 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:12.882 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:13.448 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.448 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:13.448 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:13.448 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.448 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:13.448 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:13.448 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:13.448 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
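[editor note] Before each attach, the test narrows the host to a single digest/DH-group combination and re-registers the host NQN on the subsystem with the key pair for that round. Roughly, for the ffdhe3072/key2 iteration above (the test drives the target through its rpc_cmd helper; it is shown here as plain rpc.py against the target's default RPC socket, which is an assumption of this sketch):

  # Host side: only offer sha256 + ffdhe3072 during DH-HMAC-CHAP negotiation
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # Target side: allow this host NQN on cnode0 with key2/ckey2
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2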
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.706 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.965 00:10:13.965 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.965 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.965 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:14.223 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.223 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.223 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:14.223 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.483 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:14.483 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:14.483 { 00:10:14.483 "cntlid": 21, 00:10:14.483 "qid": 0, 00:10:14.483 "state": "enabled", 00:10:14.483 "thread": "nvmf_tgt_poll_group_000", 00:10:14.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:14.483 "listen_address": { 00:10:14.483 "trtype": "TCP", 00:10:14.483 "adrfam": "IPv4", 00:10:14.483 "traddr": "10.0.0.3", 00:10:14.483 "trsvcid": "4420" 00:10:14.483 }, 00:10:14.483 "peer_address": { 00:10:14.483 "trtype": "TCP", 00:10:14.483 "adrfam": "IPv4", 00:10:14.483 "traddr": "10.0.0.1", 00:10:14.483 "trsvcid": "43532" 00:10:14.483 }, 00:10:14.483 "auth": { 00:10:14.483 "state": "completed", 00:10:14.483 "digest": "sha256", 00:10:14.483 "dhgroup": "ffdhe3072" 00:10:14.483 } 00:10:14.483 } 00:10:14.483 ]' 00:10:14.483 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:14.483 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.483 08:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:14.483 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:14.483 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:14.483 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.483 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.483 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.743 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:14.743 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:15.678 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.678 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:15.678 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:15.678 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.678 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:15.678 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:15.678 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:15.678 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:15.678 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:15.678 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:15.678 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:15.678 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:15.678 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:15.678 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.678 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:10:15.678 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:15.678 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.938 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:15.939 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:15.939 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:15.939 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:16.209 00:10:16.209 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:16.209 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.209 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:16.468 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.468 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.468 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:16.468 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.468 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:16.468 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:16.468 { 00:10:16.468 "cntlid": 23, 00:10:16.468 "qid": 0, 00:10:16.468 "state": "enabled", 00:10:16.468 "thread": "nvmf_tgt_poll_group_000", 00:10:16.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:16.468 "listen_address": { 00:10:16.468 "trtype": "TCP", 00:10:16.468 "adrfam": "IPv4", 00:10:16.468 "traddr": "10.0.0.3", 00:10:16.468 "trsvcid": "4420" 00:10:16.468 }, 00:10:16.468 "peer_address": { 00:10:16.468 "trtype": "TCP", 00:10:16.468 "adrfam": "IPv4", 00:10:16.468 "traddr": "10.0.0.1", 00:10:16.468 "trsvcid": "43570" 00:10:16.468 }, 00:10:16.468 "auth": { 00:10:16.468 "state": "completed", 00:10:16.468 "digest": "sha256", 00:10:16.468 "dhgroup": "ffdhe3072" 00:10:16.468 } 00:10:16.468 } 00:10:16.468 ]' 00:10:16.468 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:16.726 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:16.726 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:16.726 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:16.726 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:16.726 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.726 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.726 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.984 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:16.984 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:17.919 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.919 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:17.920 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:17.920 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.920 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:17.920 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:17.920 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:17.920 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:17.920 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.205 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.476 00:10:18.476 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:18.476 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.476 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:19.044 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:19.045 { 00:10:19.045 "cntlid": 25, 00:10:19.045 "qid": 0, 00:10:19.045 "state": "enabled", 00:10:19.045 "thread": "nvmf_tgt_poll_group_000", 00:10:19.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:19.045 "listen_address": { 00:10:19.045 "trtype": "TCP", 00:10:19.045 "adrfam": "IPv4", 00:10:19.045 "traddr": "10.0.0.3", 00:10:19.045 "trsvcid": "4420" 00:10:19.045 }, 00:10:19.045 "peer_address": { 00:10:19.045 "trtype": "TCP", 00:10:19.045 "adrfam": "IPv4", 00:10:19.045 "traddr": "10.0.0.1", 00:10:19.045 "trsvcid": "43576" 00:10:19.045 }, 00:10:19.045 "auth": { 00:10:19.045 "state": "completed", 00:10:19.045 "digest": "sha256", 00:10:19.045 "dhgroup": "ffdhe4096" 00:10:19.045 } 00:10:19.045 } 00:10:19.045 ]' 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.045 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.304 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:19.304 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:20.237 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:20.238 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.496 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:20.496 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:20.496 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:20.496 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:20.754 00:10:20.754 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:20.754 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.754 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:21.012 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.012 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.012 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:21.012 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.012 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:21.012 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:21.012 { 00:10:21.012 "cntlid": 27, 00:10:21.012 "qid": 0, 00:10:21.012 "state": "enabled", 00:10:21.012 "thread": "nvmf_tgt_poll_group_000", 00:10:21.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:21.012 "listen_address": { 00:10:21.012 "trtype": "TCP", 00:10:21.012 "adrfam": "IPv4", 00:10:21.012 "traddr": "10.0.0.3", 00:10:21.012 "trsvcid": "4420" 00:10:21.012 }, 00:10:21.012 "peer_address": { 00:10:21.012 "trtype": "TCP", 00:10:21.012 "adrfam": "IPv4", 00:10:21.012 "traddr": "10.0.0.1", 00:10:21.012 "trsvcid": "43594" 00:10:21.012 }, 00:10:21.012 "auth": { 00:10:21.012 "state": "completed", 
00:10:21.012 "digest": "sha256", 00:10:21.012 "dhgroup": "ffdhe4096" 00:10:21.012 } 00:10:21.012 } 00:10:21.012 ]' 00:10:21.012 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:21.012 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:21.012 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:21.270 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:21.271 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:21.271 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.271 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.271 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.529 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:21.529 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:22.463 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.463 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:22.463 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:22.463 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.463 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:22.463 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:22.463 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:22.463 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:22.721 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:22.721 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:22.721 08:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:22.721 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:22.721 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:22.721 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.721 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.721 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:22.721 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.721 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:22.721 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.721 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.721 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:22.979 00:10:22.979 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.979 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.979 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.237 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.495 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.495 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:23.495 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.496 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:23.496 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:23.496 { 00:10:23.496 "cntlid": 29, 00:10:23.496 "qid": 0, 00:10:23.496 "state": "enabled", 00:10:23.496 "thread": "nvmf_tgt_poll_group_000", 00:10:23.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:23.496 "listen_address": { 00:10:23.496 "trtype": "TCP", 00:10:23.496 "adrfam": "IPv4", 00:10:23.496 "traddr": "10.0.0.3", 00:10:23.496 "trsvcid": "4420" 00:10:23.496 }, 00:10:23.496 "peer_address": { 00:10:23.496 "trtype": "TCP", 00:10:23.496 "adrfam": 
"IPv4", 00:10:23.496 "traddr": "10.0.0.1", 00:10:23.496 "trsvcid": "43618" 00:10:23.496 }, 00:10:23.496 "auth": { 00:10:23.496 "state": "completed", 00:10:23.496 "digest": "sha256", 00:10:23.496 "dhgroup": "ffdhe4096" 00:10:23.496 } 00:10:23.496 } 00:10:23.496 ]' 00:10:23.496 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:23.496 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:23.496 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:23.496 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:23.496 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:23.496 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.496 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.496 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.754 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:23.754 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:24.685 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.685 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:24.686 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:24.686 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.686 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:24.686 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.686 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:24.686 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:24.953 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:24.953 08:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.953 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:24.953 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:24.953 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:24.953 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.953 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:10:24.953 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:24.953 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.953 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:24.953 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:24.953 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:24.953 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:25.224 00:10:25.224 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:25.224 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.224 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.482 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.482 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.482 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:25.482 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.482 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:25.482 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:25.482 { 00:10:25.482 "cntlid": 31, 00:10:25.482 "qid": 0, 00:10:25.482 "state": "enabled", 00:10:25.482 "thread": "nvmf_tgt_poll_group_000", 00:10:25.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:25.482 "listen_address": { 00:10:25.482 "trtype": "TCP", 00:10:25.482 "adrfam": "IPv4", 00:10:25.482 "traddr": "10.0.0.3", 00:10:25.482 "trsvcid": "4420" 00:10:25.482 }, 00:10:25.482 "peer_address": { 00:10:25.482 "trtype": "TCP", 
00:10:25.482 "adrfam": "IPv4", 00:10:25.482 "traddr": "10.0.0.1", 00:10:25.482 "trsvcid": "44894" 00:10:25.482 }, 00:10:25.482 "auth": { 00:10:25.482 "state": "completed", 00:10:25.482 "digest": "sha256", 00:10:25.482 "dhgroup": "ffdhe4096" 00:10:25.482 } 00:10:25.482 } 00:10:25.482 ]' 00:10:25.482 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.482 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.482 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.740 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:25.740 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.740 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.740 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.740 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.998 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:25.998 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:26.564 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.564 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:26.564 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:26.564 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.564 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:26.564 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:26.564 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.564 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:26.564 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:27.130 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:27.130 
08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:27.130 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:27.130 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:27.130 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:27.130 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.130 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.130 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:27.130 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.130 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:27.130 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.130 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.130 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.388 00:10:27.388 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:27.388 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:27.388 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.647 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.647 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.647 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:27.647 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.647 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:27.647 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.647 { 00:10:27.647 "cntlid": 33, 00:10:27.647 "qid": 0, 00:10:27.647 "state": "enabled", 00:10:27.647 "thread": "nvmf_tgt_poll_group_000", 00:10:27.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:27.647 "listen_address": { 00:10:27.647 "trtype": "TCP", 00:10:27.647 "adrfam": "IPv4", 00:10:27.647 "traddr": 
"10.0.0.3", 00:10:27.647 "trsvcid": "4420" 00:10:27.647 }, 00:10:27.647 "peer_address": { 00:10:27.647 "trtype": "TCP", 00:10:27.647 "adrfam": "IPv4", 00:10:27.647 "traddr": "10.0.0.1", 00:10:27.647 "trsvcid": "44928" 00:10:27.647 }, 00:10:27.647 "auth": { 00:10:27.647 "state": "completed", 00:10:27.647 "digest": "sha256", 00:10:27.647 "dhgroup": "ffdhe6144" 00:10:27.647 } 00:10:27.647 } 00:10:27.647 ]' 00:10:27.647 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.905 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.905 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.905 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:27.905 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.905 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.905 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.905 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.163 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:28.163 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:29.095 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.095 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:29.095 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:29.095 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.095 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:29.095 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:29.095 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:29.095 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:29.095 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:29.096 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:29.096 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:29.096 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:29.096 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:29.096 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.096 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.096 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:29.096 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.096 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:29.096 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.096 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.096 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.662 00:10:29.662 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.662 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:29.662 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:30.228 { 00:10:30.228 "cntlid": 35, 00:10:30.228 "qid": 0, 00:10:30.228 "state": "enabled", 00:10:30.228 "thread": "nvmf_tgt_poll_group_000", 
00:10:30.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:30.228 "listen_address": { 00:10:30.228 "trtype": "TCP", 00:10:30.228 "adrfam": "IPv4", 00:10:30.228 "traddr": "10.0.0.3", 00:10:30.228 "trsvcid": "4420" 00:10:30.228 }, 00:10:30.228 "peer_address": { 00:10:30.228 "trtype": "TCP", 00:10:30.228 "adrfam": "IPv4", 00:10:30.228 "traddr": "10.0.0.1", 00:10:30.228 "trsvcid": "44956" 00:10:30.228 }, 00:10:30.228 "auth": { 00:10:30.228 "state": "completed", 00:10:30.228 "digest": "sha256", 00:10:30.228 "dhgroup": "ffdhe6144" 00:10:30.228 } 00:10:30.228 } 00:10:30.228 ]' 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.228 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.485 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:30.485 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:31.052 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.052 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:31.052 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:31.052 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.052 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:31.052 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:31.052 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:31.052 08:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.649 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.907 00:10:31.907 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:31.907 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.907 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.165 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.166 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.166 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:32.166 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.166 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:32.166 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:32.166 { 
00:10:32.166 "cntlid": 37, 00:10:32.166 "qid": 0, 00:10:32.166 "state": "enabled", 00:10:32.166 "thread": "nvmf_tgt_poll_group_000", 00:10:32.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:32.166 "listen_address": { 00:10:32.166 "trtype": "TCP", 00:10:32.166 "adrfam": "IPv4", 00:10:32.166 "traddr": "10.0.0.3", 00:10:32.166 "trsvcid": "4420" 00:10:32.166 }, 00:10:32.166 "peer_address": { 00:10:32.166 "trtype": "TCP", 00:10:32.166 "adrfam": "IPv4", 00:10:32.166 "traddr": "10.0.0.1", 00:10:32.166 "trsvcid": "44972" 00:10:32.166 }, 00:10:32.166 "auth": { 00:10:32.166 "state": "completed", 00:10:32.166 "digest": "sha256", 00:10:32.166 "dhgroup": "ffdhe6144" 00:10:32.166 } 00:10:32.166 } 00:10:32.166 ]' 00:10:32.166 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:32.424 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.424 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.424 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:32.424 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.424 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.424 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.424 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.681 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:32.681 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:33.617 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.617 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:33.617 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:33.617 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.617 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:33.617 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.617 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:33.617 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:33.617 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:34.185 00:10:34.185 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.185 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.185 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.444 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.444 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.444 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:34.444 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.444 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:34.444 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:34.444 { 00:10:34.444 "cntlid": 39, 00:10:34.444 "qid": 0, 00:10:34.444 "state": "enabled", 00:10:34.444 "thread": "nvmf_tgt_poll_group_000", 00:10:34.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:34.444 "listen_address": { 00:10:34.444 "trtype": "TCP", 00:10:34.444 "adrfam": "IPv4", 00:10:34.444 "traddr": "10.0.0.3", 00:10:34.444 "trsvcid": "4420" 00:10:34.444 }, 00:10:34.444 "peer_address": { 00:10:34.444 "trtype": "TCP", 00:10:34.444 "adrfam": "IPv4", 00:10:34.444 "traddr": "10.0.0.1", 00:10:34.444 "trsvcid": "42752" 00:10:34.444 }, 00:10:34.444 "auth": { 00:10:34.444 "state": "completed", 00:10:34.444 "digest": "sha256", 00:10:34.444 "dhgroup": "ffdhe6144" 00:10:34.444 } 00:10:34.444 } 00:10:34.444 ]' 00:10:34.444 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.444 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.444 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.703 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:34.703 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.703 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.703 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.703 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.961 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:34.961 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:35.528 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.528 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:35.528 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:35.528 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.528 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:35.528 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:35.528 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.528 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:35.528 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:35.787 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:35.787 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.787 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:35.787 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:35.787 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:35.787 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.787 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.787 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:35.787 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.045 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:36.045 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.045 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.045 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.612 00:10:36.612 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:36.612 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:36.612 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.871 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.871 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.871 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:36.871 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.871 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 
0 == 0 ]] 00:10:36.871 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.871 { 00:10:36.871 "cntlid": 41, 00:10:36.871 "qid": 0, 00:10:36.871 "state": "enabled", 00:10:36.871 "thread": "nvmf_tgt_poll_group_000", 00:10:36.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:36.871 "listen_address": { 00:10:36.871 "trtype": "TCP", 00:10:36.871 "adrfam": "IPv4", 00:10:36.871 "traddr": "10.0.0.3", 00:10:36.871 "trsvcid": "4420" 00:10:36.871 }, 00:10:36.871 "peer_address": { 00:10:36.871 "trtype": "TCP", 00:10:36.871 "adrfam": "IPv4", 00:10:36.871 "traddr": "10.0.0.1", 00:10:36.871 "trsvcid": "42776" 00:10:36.871 }, 00:10:36.871 "auth": { 00:10:36.871 "state": "completed", 00:10:36.871 "digest": "sha256", 00:10:36.871 "dhgroup": "ffdhe8192" 00:10:36.871 } 00:10:36.871 } 00:10:36.871 ]' 00:10:36.871 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.871 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.871 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.871 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:36.871 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.129 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.129 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.129 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.388 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:37.388 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:37.956 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.956 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:37.956 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:37.956 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.956 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 
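The records above make up one complete DH-HMAC-CHAP round of target/auth.sh: the host-side SPDK app (RPC socket /var/tmp/host.sock) is pinned to a single digest/dhgroup pair, the target allows the host NQN together with a key pair, a bdev controller is attached so the in-band handshake runs, the resulting qpair is inspected, and the same secrets are then exercised once more through the kernel initiator with nvme connect before the round is torn down. A condensed sketch of that per-round pattern, built only from the commands visible in this log (rpc.py without -s stands in for the rpc_cmd helper that talks to the target's default socket; key0/ckey0 are key names assumed to have been registered earlier in the script; <hostnqn> abbreviates the nqn.2014-08.org.nvmexpress:uuid:3c963c17-... value shown above):

  # host side: restrict the negotiable digest and DH group for this round
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # target side: allow the host NQN with host and controller keys
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attaching a controller drives the in-band DH-HMAC-CHAP handshake
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify the qpair, tear down, then repeat the handshake with the kernel initiator
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -q <hostnqn> -l 0 \
      --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>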
00:10:37.956 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:37.956 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:37.956 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:38.215 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:38.215 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:38.215 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:38.216 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:38.216 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:38.216 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.216 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.216 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:38.216 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.216 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:38.216 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.216 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.216 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.782 00:10:38.782 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:38.782 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:38.782 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.349 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.349 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.349 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:39.349 08:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.349 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:39.349 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.349 { 00:10:39.349 "cntlid": 43, 00:10:39.349 "qid": 0, 00:10:39.349 "state": "enabled", 00:10:39.349 "thread": "nvmf_tgt_poll_group_000", 00:10:39.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:39.349 "listen_address": { 00:10:39.349 "trtype": "TCP", 00:10:39.349 "adrfam": "IPv4", 00:10:39.349 "traddr": "10.0.0.3", 00:10:39.349 "trsvcid": "4420" 00:10:39.349 }, 00:10:39.349 "peer_address": { 00:10:39.349 "trtype": "TCP", 00:10:39.349 "adrfam": "IPv4", 00:10:39.349 "traddr": "10.0.0.1", 00:10:39.349 "trsvcid": "42810" 00:10:39.349 }, 00:10:39.349 "auth": { 00:10:39.349 "state": "completed", 00:10:39.349 "digest": "sha256", 00:10:39.349 "dhgroup": "ffdhe8192" 00:10:39.349 } 00:10:39.349 } 00:10:39.349 ]' 00:10:39.349 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.349 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:39.349 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.349 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:39.350 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.350 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.350 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.350 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.609 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:39.609 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:40.543 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.543 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:40.543 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:40.543 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
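Each round is verified by dumping the subsystem's qpairs and asserting on the auth block with jq, which is what the .[0].auth.digest / .[0].auth.dhgroup / .[0].auth.state checks in these records do. A minimal stand-alone version of that verification, with the JSON shape taken from the qpair dumps in this log:

  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

A state of "completed" is the per-qpair signal the script relies on to confirm that the DH-HMAC-CHAP negotiation finished successfully for the digest and dhgroup selected in that round.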
00:10:40.543 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:40.543 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.543 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:40.543 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.543 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.477 00:10:41.477 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:41.477 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:41.477 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.477 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.477 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.477 08:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:41.477 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.735 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:41.735 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.735 { 00:10:41.735 "cntlid": 45, 00:10:41.735 "qid": 0, 00:10:41.735 "state": "enabled", 00:10:41.735 "thread": "nvmf_tgt_poll_group_000", 00:10:41.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:41.735 "listen_address": { 00:10:41.735 "trtype": "TCP", 00:10:41.735 "adrfam": "IPv4", 00:10:41.735 "traddr": "10.0.0.3", 00:10:41.735 "trsvcid": "4420" 00:10:41.735 }, 00:10:41.735 "peer_address": { 00:10:41.735 "trtype": "TCP", 00:10:41.735 "adrfam": "IPv4", 00:10:41.735 "traddr": "10.0.0.1", 00:10:41.735 "trsvcid": "42836" 00:10:41.735 }, 00:10:41.735 "auth": { 00:10:41.735 "state": "completed", 00:10:41.735 "digest": "sha256", 00:10:41.735 "dhgroup": "ffdhe8192" 00:10:41.735 } 00:10:41.735 } 00:10:41.735 ]' 00:10:41.735 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.735 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:41.736 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.736 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:41.736 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.736 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.736 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.736 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.994 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:41.994 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:42.563 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.563 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:42.563 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 
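The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion that appears in every connect_authenticate call makes the controller key optional: for key ids whose ckey entry is unset (key3 in this log) the round runs unidirectional authentication, where only the host proves its identity, while the other key ids also hand the target a controller key so it must authenticate back to the host. The two shapes of the kernel-initiator connect, copied from the commands in this log with the secrets elided:

  # bidirectional: host secret plus controller secret
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -q <hostnqn> -l 0 \
      --dhchap-secret DHHC-1:02:... --dhchap-ctrl-secret DHHC-1:01:...
  # unidirectional (key3 rounds): only the host authenticates
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -q <hostnqn> -l 0 \
      --dhchap-secret DHHC-1:03:...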
00:10:42.563 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.563 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:42.563 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.563 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:42.563 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:42.823 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:42.823 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.823 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:42.823 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:42.823 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:42.823 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.823 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:10:42.823 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:42.823 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.081 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:43.081 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:43.081 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:43.081 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:43.648 00:10:43.648 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.648 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.648 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.906 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.906 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.906 
08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:43.906 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.906 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:43.906 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.906 { 00:10:43.906 "cntlid": 47, 00:10:43.906 "qid": 0, 00:10:43.906 "state": "enabled", 00:10:43.906 "thread": "nvmf_tgt_poll_group_000", 00:10:43.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:43.906 "listen_address": { 00:10:43.906 "trtype": "TCP", 00:10:43.906 "adrfam": "IPv4", 00:10:43.906 "traddr": "10.0.0.3", 00:10:43.906 "trsvcid": "4420" 00:10:43.906 }, 00:10:43.906 "peer_address": { 00:10:43.906 "trtype": "TCP", 00:10:43.906 "adrfam": "IPv4", 00:10:43.906 "traddr": "10.0.0.1", 00:10:43.906 "trsvcid": "36004" 00:10:43.906 }, 00:10:43.906 "auth": { 00:10:43.906 "state": "completed", 00:10:43.906 "digest": "sha256", 00:10:43.906 "dhgroup": "ffdhe8192" 00:10:43.906 } 00:10:43.906 } 00:10:43.906 ]' 00:10:43.906 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.906 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.906 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.906 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:43.906 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.164 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.164 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.164 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.423 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:44.423 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:44.989 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.989 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:44.989 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:44.989 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
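With key3 the sha256/ffdhe8192 combination is exhausted, and the "for digest" / "for dhgroup" / "for keyid" markers in the next records show the outer loops advancing to sha384 with the null dhgroup, i.e. DH-HMAC-CHAP without a Diffie-Hellman exchange. A rough reconstruction of the loop skeleton implied by those markers (hostrpc and connect_authenticate are the helpers named in the log; array contents beyond the values actually visible here are assumptions):

  digests=(sha256 sha384 sha512)                                     # sha256 and sha384 appear in this log
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # null, ffdhe4096/6144/8192 appear here
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done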
00:10:44.989 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:44.989 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:44.989 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:44.989 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.989 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:44.989 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.248 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.520 00:10:45.797 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:45.797 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.797 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.797 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.797 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.797 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:45.797 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.797 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:45.798 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.798 { 00:10:45.798 "cntlid": 49, 00:10:45.798 "qid": 0, 00:10:45.798 "state": "enabled", 00:10:45.798 "thread": "nvmf_tgt_poll_group_000", 00:10:45.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:45.798 "listen_address": { 00:10:45.798 "trtype": "TCP", 00:10:45.798 "adrfam": "IPv4", 00:10:45.798 "traddr": "10.0.0.3", 00:10:45.798 "trsvcid": "4420" 00:10:45.798 }, 00:10:45.798 "peer_address": { 00:10:45.798 "trtype": "TCP", 00:10:45.798 "adrfam": "IPv4", 00:10:45.798 "traddr": "10.0.0.1", 00:10:45.798 "trsvcid": "36042" 00:10:45.798 }, 00:10:45.798 "auth": { 00:10:45.798 "state": "completed", 00:10:45.798 "digest": "sha384", 00:10:45.798 "dhgroup": "null" 00:10:45.798 } 00:10:45.798 } 00:10:45.798 ]' 00:10:45.798 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.056 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:46.056 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.056 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:46.056 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.056 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.056 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.056 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.314 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:46.314 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.251 08:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.251 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.818 00:10:47.818 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:47.818 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:47.818 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.077 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.077 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.077 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:48.077 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.077 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:48.077 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.077 { 00:10:48.077 "cntlid": 51, 00:10:48.077 "qid": 0, 00:10:48.077 "state": "enabled", 00:10:48.077 "thread": "nvmf_tgt_poll_group_000", 00:10:48.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:48.077 "listen_address": { 00:10:48.077 "trtype": "TCP", 00:10:48.077 "adrfam": "IPv4", 00:10:48.077 "traddr": "10.0.0.3", 00:10:48.077 "trsvcid": "4420" 00:10:48.077 }, 00:10:48.077 "peer_address": { 00:10:48.077 "trtype": "TCP", 00:10:48.077 "adrfam": "IPv4", 00:10:48.077 "traddr": "10.0.0.1", 00:10:48.077 "trsvcid": "36058" 00:10:48.077 }, 00:10:48.077 "auth": { 00:10:48.077 "state": "completed", 00:10:48.077 "digest": "sha384", 00:10:48.077 "dhgroup": "null" 00:10:48.077 } 00:10:48.077 } 00:10:48.077 ]' 00:10:48.077 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.077 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:48.077 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:48.077 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:48.077 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:48.078 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.078 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.078 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.336 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:48.336 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:49.272 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.272 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.272 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:49.272 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:49.272 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.272 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:49.272 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:49.272 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:49.272 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.530 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.789 00:10:49.789 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:49.789 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:10:49.789 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.047 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.047 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.047 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:50.047 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.047 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:50.047 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:50.047 { 00:10:50.047 "cntlid": 53, 00:10:50.047 "qid": 0, 00:10:50.047 "state": "enabled", 00:10:50.047 "thread": "nvmf_tgt_poll_group_000", 00:10:50.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:50.047 "listen_address": { 00:10:50.047 "trtype": "TCP", 00:10:50.047 "adrfam": "IPv4", 00:10:50.047 "traddr": "10.0.0.3", 00:10:50.047 "trsvcid": "4420" 00:10:50.047 }, 00:10:50.047 "peer_address": { 00:10:50.047 "trtype": "TCP", 00:10:50.047 "adrfam": "IPv4", 00:10:50.047 "traddr": "10.0.0.1", 00:10:50.047 "trsvcid": "36088" 00:10:50.047 }, 00:10:50.047 "auth": { 00:10:50.047 "state": "completed", 00:10:50.047 "digest": "sha384", 00:10:50.047 "dhgroup": "null" 00:10:50.047 } 00:10:50.047 } 00:10:50.047 ]' 00:10:50.047 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:50.047 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:50.047 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:50.047 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:50.047 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:50.305 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.305 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.305 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.563 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:50.563 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:51.163 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.163 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:51.163 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:51.163 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.163 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:51.163 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:51.163 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:51.163 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:51.422 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:51.681 00:10:51.681 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.681 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:10:51.681 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:52.247 { 00:10:52.247 "cntlid": 55, 00:10:52.247 "qid": 0, 00:10:52.247 "state": "enabled", 00:10:52.247 "thread": "nvmf_tgt_poll_group_000", 00:10:52.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:52.247 "listen_address": { 00:10:52.247 "trtype": "TCP", 00:10:52.247 "adrfam": "IPv4", 00:10:52.247 "traddr": "10.0.0.3", 00:10:52.247 "trsvcid": "4420" 00:10:52.247 }, 00:10:52.247 "peer_address": { 00:10:52.247 "trtype": "TCP", 00:10:52.247 "adrfam": "IPv4", 00:10:52.247 "traddr": "10.0.0.1", 00:10:52.247 "trsvcid": "36110" 00:10:52.247 }, 00:10:52.247 "auth": { 00:10:52.247 "state": "completed", 00:10:52.247 "digest": "sha384", 00:10:52.247 "dhgroup": "null" 00:10:52.247 } 00:10:52.247 } 00:10:52.247 ]' 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.247 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.506 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:52.506 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:53.440 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.699 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:53.699 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.699 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.699 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.958 00:10:53.958 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.958 
08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.958 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.216 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.216 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.216 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:54.216 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.216 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:54.216 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.216 { 00:10:54.216 "cntlid": 57, 00:10:54.216 "qid": 0, 00:10:54.216 "state": "enabled", 00:10:54.216 "thread": "nvmf_tgt_poll_group_000", 00:10:54.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:54.216 "listen_address": { 00:10:54.216 "trtype": "TCP", 00:10:54.216 "adrfam": "IPv4", 00:10:54.216 "traddr": "10.0.0.3", 00:10:54.216 "trsvcid": "4420" 00:10:54.216 }, 00:10:54.216 "peer_address": { 00:10:54.216 "trtype": "TCP", 00:10:54.216 "adrfam": "IPv4", 00:10:54.216 "traddr": "10.0.0.1", 00:10:54.216 "trsvcid": "37806" 00:10:54.216 }, 00:10:54.216 "auth": { 00:10:54.216 "state": "completed", 00:10:54.216 "digest": "sha384", 00:10:54.216 "dhgroup": "ffdhe2048" 00:10:54.216 } 00:10:54.216 } 00:10:54.216 ]' 00:10:54.216 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:54.216 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:54.216 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:54.216 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:54.216 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:54.474 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.474 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.474 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.732 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:54.732 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: 
--dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:10:55.303 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.303 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:55.303 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:55.303 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.303 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:55.303 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:55.304 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:55.304 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.567 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.825 00:10:55.826 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.826 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.826 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.392 { 00:10:56.392 "cntlid": 59, 00:10:56.392 "qid": 0, 00:10:56.392 "state": "enabled", 00:10:56.392 "thread": "nvmf_tgt_poll_group_000", 00:10:56.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:56.392 "listen_address": { 00:10:56.392 "trtype": "TCP", 00:10:56.392 "adrfam": "IPv4", 00:10:56.392 "traddr": "10.0.0.3", 00:10:56.392 "trsvcid": "4420" 00:10:56.392 }, 00:10:56.392 "peer_address": { 00:10:56.392 "trtype": "TCP", 00:10:56.392 "adrfam": "IPv4", 00:10:56.392 "traddr": "10.0.0.1", 00:10:56.392 "trsvcid": "37848" 00:10:56.392 }, 00:10:56.392 "auth": { 00:10:56.392 "state": "completed", 00:10:56.392 "digest": "sha384", 00:10:56.392 "dhgroup": "ffdhe2048" 00:10:56.392 } 00:10:56.392 } 00:10:56.392 ]' 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.392 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.650 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:56.650 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:10:57.237 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.496 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:57.496 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:57.496 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.496 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:57.496 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.496 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:57.496 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.757 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.016 00:10:58.016 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:58.016 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:58.016 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.275 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.275 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.275 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:58.275 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.275 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:58.275 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.275 { 00:10:58.275 "cntlid": 61, 00:10:58.275 "qid": 0, 00:10:58.275 "state": "enabled", 00:10:58.275 "thread": "nvmf_tgt_poll_group_000", 00:10:58.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:10:58.275 "listen_address": { 00:10:58.275 "trtype": "TCP", 00:10:58.276 "adrfam": "IPv4", 00:10:58.276 "traddr": "10.0.0.3", 00:10:58.276 "trsvcid": "4420" 00:10:58.276 }, 00:10:58.276 "peer_address": { 00:10:58.276 "trtype": "TCP", 00:10:58.276 "adrfam": "IPv4", 00:10:58.276 "traddr": "10.0.0.1", 00:10:58.276 "trsvcid": "37864" 00:10:58.276 }, 00:10:58.276 "auth": { 00:10:58.276 "state": "completed", 00:10:58.276 "digest": "sha384", 00:10:58.276 "dhgroup": "ffdhe2048" 00:10:58.276 } 00:10:58.276 } 00:10:58.276 ]' 00:10:58.276 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.534 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:58.534 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.534 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:58.534 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.534 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.534 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.534 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.793 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:58.793 08:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:10:59.360 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.360 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:10:59.360 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:59.360 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.360 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:59.360 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.360 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:59.360 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.619 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:00.186 00:11:00.186 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.186 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.186 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.444 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.445 { 00:11:00.445 "cntlid": 63, 00:11:00.445 "qid": 0, 00:11:00.445 "state": "enabled", 00:11:00.445 "thread": "nvmf_tgt_poll_group_000", 00:11:00.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:00.445 "listen_address": { 00:11:00.445 "trtype": "TCP", 00:11:00.445 "adrfam": "IPv4", 00:11:00.445 "traddr": "10.0.0.3", 00:11:00.445 "trsvcid": "4420" 00:11:00.445 }, 00:11:00.445 "peer_address": { 00:11:00.445 "trtype": "TCP", 00:11:00.445 "adrfam": "IPv4", 00:11:00.445 "traddr": "10.0.0.1", 00:11:00.445 "trsvcid": "37892" 00:11:00.445 }, 00:11:00.445 "auth": { 00:11:00.445 "state": "completed", 00:11:00.445 "digest": "sha384", 00:11:00.445 "dhgroup": "ffdhe2048" 00:11:00.445 } 00:11:00.445 } 00:11:00.445 ]' 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.445 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.703 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:00.703 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:01.637 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.637 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:01.637 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:01.637 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.637 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:01.637 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.637 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.637 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:01.637 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:01.895 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.153 00:11:02.153 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.153 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.153 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.412 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.412 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.412 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:02.412 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.412 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:02.412 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.412 { 00:11:02.412 "cntlid": 65, 00:11:02.412 "qid": 0, 00:11:02.412 "state": "enabled", 00:11:02.412 "thread": "nvmf_tgt_poll_group_000", 00:11:02.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:02.412 "listen_address": { 00:11:02.412 "trtype": "TCP", 00:11:02.412 "adrfam": "IPv4", 00:11:02.412 "traddr": "10.0.0.3", 00:11:02.412 "trsvcid": "4420" 00:11:02.412 }, 00:11:02.412 "peer_address": { 00:11:02.412 "trtype": "TCP", 00:11:02.412 "adrfam": "IPv4", 00:11:02.412 "traddr": "10.0.0.1", 00:11:02.412 "trsvcid": "37916" 00:11:02.412 }, 00:11:02.412 "auth": { 00:11:02.412 "state": "completed", 00:11:02.412 "digest": "sha384", 00:11:02.412 "dhgroup": "ffdhe3072" 00:11:02.412 } 00:11:02.412 } 00:11:02.412 ]' 00:11:02.412 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.412 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:02.412 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.412 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:02.412 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.679 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.679 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.679 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.951 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:02.951 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:03.516 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.516 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:03.516 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:03.516 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.516 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:03.516 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.516 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:03.516 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:03.774 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:03.774 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.774 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:03.774 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:03.774 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:03.774 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.774 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.774 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:03.774 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.774 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:03.774 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.775 08:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.775 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.340 00:11:04.340 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:04.340 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.340 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.598 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.598 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.598 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:04.598 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.598 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:04.598 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.598 { 00:11:04.598 "cntlid": 67, 00:11:04.598 "qid": 0, 00:11:04.598 "state": "enabled", 00:11:04.598 "thread": "nvmf_tgt_poll_group_000", 00:11:04.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:04.598 "listen_address": { 00:11:04.598 "trtype": "TCP", 00:11:04.598 "adrfam": "IPv4", 00:11:04.598 "traddr": "10.0.0.3", 00:11:04.598 "trsvcid": "4420" 00:11:04.598 }, 00:11:04.598 "peer_address": { 00:11:04.598 "trtype": "TCP", 00:11:04.598 "adrfam": "IPv4", 00:11:04.598 "traddr": "10.0.0.1", 00:11:04.598 "trsvcid": "36038" 00:11:04.598 }, 00:11:04.598 "auth": { 00:11:04.598 "state": "completed", 00:11:04.598 "digest": "sha384", 00:11:04.598 "dhgroup": "ffdhe3072" 00:11:04.598 } 00:11:04.598 } 00:11:04.598 ]' 00:11:04.598 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.598 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.598 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.598 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:04.598 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.598 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.598 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.598 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.856 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:04.856 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:05.790 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.790 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:05.790 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:05.790 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.790 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:05.790 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.791 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:05.791 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.049 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.306 00:11:06.306 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.306 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.306 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.564 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.564 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.564 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:06.564 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.564 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:06.564 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.564 { 00:11:06.564 "cntlid": 69, 00:11:06.564 "qid": 0, 00:11:06.564 "state": "enabled", 00:11:06.564 "thread": "nvmf_tgt_poll_group_000", 00:11:06.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:06.564 "listen_address": { 00:11:06.564 "trtype": "TCP", 00:11:06.564 "adrfam": "IPv4", 00:11:06.564 "traddr": "10.0.0.3", 00:11:06.564 "trsvcid": "4420" 00:11:06.564 }, 00:11:06.564 "peer_address": { 00:11:06.564 "trtype": "TCP", 00:11:06.564 "adrfam": "IPv4", 00:11:06.564 "traddr": "10.0.0.1", 00:11:06.564 "trsvcid": "36060" 00:11:06.564 }, 00:11:06.564 "auth": { 00:11:06.564 "state": "completed", 00:11:06.564 "digest": "sha384", 00:11:06.564 "dhgroup": "ffdhe3072" 00:11:06.564 } 00:11:06.564 } 00:11:06.564 ]' 00:11:06.564 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.822 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.822 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.822 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:06.822 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.822 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.822 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:06.822 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.079 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:11:07.079 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:11:08.013 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.013 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:08.013 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:08.013 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.013 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:08.013 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.013 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:08.013 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:08.329 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:08.587 00:11:08.587 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.587 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.587 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.845 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.845 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.845 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:08.845 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.103 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:09.103 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.103 { 00:11:09.103 "cntlid": 71, 00:11:09.103 "qid": 0, 00:11:09.103 "state": "enabled", 00:11:09.103 "thread": "nvmf_tgt_poll_group_000", 00:11:09.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:09.103 "listen_address": { 00:11:09.103 "trtype": "TCP", 00:11:09.103 "adrfam": "IPv4", 00:11:09.103 "traddr": "10.0.0.3", 00:11:09.103 "trsvcid": "4420" 00:11:09.103 }, 00:11:09.103 "peer_address": { 00:11:09.103 "trtype": "TCP", 00:11:09.103 "adrfam": "IPv4", 00:11:09.103 "traddr": "10.0.0.1", 00:11:09.103 "trsvcid": "36100" 00:11:09.103 }, 00:11:09.103 "auth": { 00:11:09.103 "state": "completed", 00:11:09.103 "digest": "sha384", 00:11:09.103 "dhgroup": "ffdhe3072" 00:11:09.103 } 00:11:09.103 } 00:11:09.103 ]' 00:11:09.103 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.103 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.103 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.103 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:09.103 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.103 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.103 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.103 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.670 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:09.670 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:10.236 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.236 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:10.236 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:10.236 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.236 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:10.236 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:10.236 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.236 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:10.236 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:10.494 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:10.494 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.494 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:10.494 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:10.494 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:10.494 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.494 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.494 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:10.494 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.495 08:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:10.495 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.495 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.495 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.062 00:11:11.062 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:11.062 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:11.062 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.321 { 00:11:11.321 "cntlid": 73, 00:11:11.321 "qid": 0, 00:11:11.321 "state": "enabled", 00:11:11.321 "thread": "nvmf_tgt_poll_group_000", 00:11:11.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:11.321 "listen_address": { 00:11:11.321 "trtype": "TCP", 00:11:11.321 "adrfam": "IPv4", 00:11:11.321 "traddr": "10.0.0.3", 00:11:11.321 "trsvcid": "4420" 00:11:11.321 }, 00:11:11.321 "peer_address": { 00:11:11.321 "trtype": "TCP", 00:11:11.321 "adrfam": "IPv4", 00:11:11.321 "traddr": "10.0.0.1", 00:11:11.321 "trsvcid": "36116" 00:11:11.321 }, 00:11:11.321 "auth": { 00:11:11.321 "state": "completed", 00:11:11.321 "digest": "sha384", 00:11:11.321 "dhgroup": "ffdhe4096" 00:11:11.321 } 00:11:11.321 } 00:11:11.321 ]' 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.321 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.888 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:11.888 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:12.456 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.456 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:12.456 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:12.456 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.456 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:12.456 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.456 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:12.456 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:13.028 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:13.028 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.028 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:13.028 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:13.028 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:13.028 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.028 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.028 08:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:13.028 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.028 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:13.029 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.029 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.029 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.287 00:11:13.287 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.287 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.287 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.853 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.853 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.853 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:13.853 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.853 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:13.853 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.853 { 00:11:13.853 "cntlid": 75, 00:11:13.853 "qid": 0, 00:11:13.853 "state": "enabled", 00:11:13.853 "thread": "nvmf_tgt_poll_group_000", 00:11:13.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:13.853 "listen_address": { 00:11:13.853 "trtype": "TCP", 00:11:13.853 "adrfam": "IPv4", 00:11:13.853 "traddr": "10.0.0.3", 00:11:13.854 "trsvcid": "4420" 00:11:13.854 }, 00:11:13.854 "peer_address": { 00:11:13.854 "trtype": "TCP", 00:11:13.854 "adrfam": "IPv4", 00:11:13.854 "traddr": "10.0.0.1", 00:11:13.854 "trsvcid": "50822" 00:11:13.854 }, 00:11:13.854 "auth": { 00:11:13.854 "state": "completed", 00:11:13.854 "digest": "sha384", 00:11:13.854 "dhgroup": "ffdhe4096" 00:11:13.854 } 00:11:13.854 } 00:11:13.854 ]' 00:11:13.854 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.854 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.854 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.854 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:13.854 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.854 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.854 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.854 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.112 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:14.112 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:15.058 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.058 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:15.058 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:15.058 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.058 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:15.058 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.058 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:15.058 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.316 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.883 00:11:15.883 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.883 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.883 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.883 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.883 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.883 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:15.883 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.883 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:15.883 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.883 { 00:11:15.883 "cntlid": 77, 00:11:15.883 "qid": 0, 00:11:15.883 "state": "enabled", 00:11:15.883 "thread": "nvmf_tgt_poll_group_000", 00:11:15.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:15.883 "listen_address": { 00:11:15.883 "trtype": "TCP", 00:11:15.883 "adrfam": "IPv4", 00:11:15.883 "traddr": "10.0.0.3", 00:11:15.883 "trsvcid": "4420" 00:11:15.883 }, 00:11:15.883 "peer_address": { 00:11:15.883 "trtype": "TCP", 00:11:15.883 "adrfam": "IPv4", 00:11:15.883 "traddr": "10.0.0.1", 00:11:15.883 "trsvcid": "50848" 00:11:15.883 }, 00:11:15.883 "auth": { 00:11:15.883 "state": "completed", 00:11:15.883 "digest": "sha384", 00:11:15.883 "dhgroup": "ffdhe4096" 00:11:15.883 } 00:11:15.883 } 00:11:15.883 ]' 00:11:15.883 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.141 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.141 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:16.141 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:16.141 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.141 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.141 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.141 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.400 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:11:16.400 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:11:17.335 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.335 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:17.335 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:17.335 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.335 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:17.335 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.335 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:17.335 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:17.594 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:17.594 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.594 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:17.594 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:17.594 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:17.594 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.594 08:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:11:17.594 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:17.594 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.594 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:17.594 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:17.594 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:17.594 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:17.853 00:11:17.853 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.853 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.853 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.420 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.420 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.420 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:18.420 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.420 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:18.420 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.420 { 00:11:18.420 "cntlid": 79, 00:11:18.420 "qid": 0, 00:11:18.420 "state": "enabled", 00:11:18.420 "thread": "nvmf_tgt_poll_group_000", 00:11:18.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:18.420 "listen_address": { 00:11:18.420 "trtype": "TCP", 00:11:18.420 "adrfam": "IPv4", 00:11:18.420 "traddr": "10.0.0.3", 00:11:18.420 "trsvcid": "4420" 00:11:18.420 }, 00:11:18.420 "peer_address": { 00:11:18.420 "trtype": "TCP", 00:11:18.420 "adrfam": "IPv4", 00:11:18.420 "traddr": "10.0.0.1", 00:11:18.420 "trsvcid": "50876" 00:11:18.420 }, 00:11:18.420 "auth": { 00:11:18.420 "state": "completed", 00:11:18.420 "digest": "sha384", 00:11:18.420 "dhgroup": "ffdhe4096" 00:11:18.420 } 00:11:18.420 } 00:11:18.420 ]' 00:11:18.420 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.421 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.421 08:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.421 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:18.421 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.421 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.421 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.421 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.679 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:18.679 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:19.269 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.527 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:19.527 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:19.527 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.527 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:19.527 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:19.527 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.527 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:19.527 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:19.785 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:19.786 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.786 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:19.786 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:19.786 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:19.786 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.786 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.786 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:19.786 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.786 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:19.786 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.786 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.786 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.044 00:11:20.044 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.044 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.044 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.612 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.612 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.612 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:20.612 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.612 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:20.612 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.612 { 00:11:20.612 "cntlid": 81, 00:11:20.612 "qid": 0, 00:11:20.612 "state": "enabled", 00:11:20.612 "thread": "nvmf_tgt_poll_group_000", 00:11:20.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:20.612 "listen_address": { 00:11:20.612 "trtype": "TCP", 00:11:20.612 "adrfam": "IPv4", 00:11:20.612 "traddr": "10.0.0.3", 00:11:20.612 "trsvcid": "4420" 00:11:20.612 }, 00:11:20.612 "peer_address": { 00:11:20.612 "trtype": "TCP", 00:11:20.612 "adrfam": "IPv4", 00:11:20.612 "traddr": "10.0.0.1", 00:11:20.612 "trsvcid": "50894" 00:11:20.612 }, 00:11:20.612 "auth": { 00:11:20.612 "state": "completed", 00:11:20.612 "digest": "sha384", 00:11:20.612 "dhgroup": "ffdhe6144" 00:11:20.612 } 00:11:20.612 } 00:11:20.612 ]' 00:11:20.612 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:11:20.612 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:20.612 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.612 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:20.612 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.612 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.612 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.612 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.871 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:20.871 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:21.438 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.438 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:21.438 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:21.438 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.698 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:21.698 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.698 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:21.698 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.957 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.215 00:11:22.473 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.473 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.473 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.732 { 00:11:22.732 "cntlid": 83, 00:11:22.732 "qid": 0, 00:11:22.732 "state": "enabled", 00:11:22.732 "thread": "nvmf_tgt_poll_group_000", 00:11:22.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:22.732 "listen_address": { 00:11:22.732 "trtype": "TCP", 00:11:22.732 "adrfam": "IPv4", 00:11:22.732 "traddr": "10.0.0.3", 00:11:22.732 "trsvcid": "4420" 00:11:22.732 }, 00:11:22.732 "peer_address": { 00:11:22.732 "trtype": "TCP", 00:11:22.732 "adrfam": "IPv4", 00:11:22.732 "traddr": "10.0.0.1", 00:11:22.732 "trsvcid": "50926" 00:11:22.732 }, 00:11:22.732 "auth": { 00:11:22.732 "state": "completed", 00:11:22.732 "digest": "sha384", 
00:11:22.732 "dhgroup": "ffdhe6144" 00:11:22.732 } 00:11:22.732 } 00:11:22.732 ]' 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.732 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.300 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:23.300 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:23.866 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.866 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:23.866 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:23.866 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.866 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:23.866 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.866 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:23.866 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.124 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.691 00:11:24.691 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.691 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.691 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.949 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.949 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.950 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:24.950 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.950 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:24.950 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.950 { 00:11:24.950 "cntlid": 85, 00:11:24.950 "qid": 0, 00:11:24.950 "state": "enabled", 00:11:24.950 "thread": "nvmf_tgt_poll_group_000", 00:11:24.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:24.950 "listen_address": { 00:11:24.950 "trtype": "TCP", 00:11:24.950 "adrfam": "IPv4", 00:11:24.950 "traddr": "10.0.0.3", 00:11:24.950 "trsvcid": "4420" 00:11:24.950 }, 00:11:24.950 "peer_address": { 00:11:24.950 "trtype": "TCP", 00:11:24.950 "adrfam": "IPv4", 00:11:24.950 "traddr": "10.0.0.1", 00:11:24.950 "trsvcid": "38390" 
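
The qpair dump that follows is what actually proves authentication took place: auth.sh re-reads it and compares state, digest and DH group against the values it just configured. A condensed sketch of that check for this pass, assuming the default target RPC socket (the script itself goes through its rpc_cmd helper and a saved $qpairs variable, with the same jq paths as below):

    qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
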
00:11:24.950 }, 00:11:24.950 "auth": { 00:11:24.950 "state": "completed", 00:11:24.950 "digest": "sha384", 00:11:24.950 "dhgroup": "ffdhe6144" 00:11:24.950 } 00:11:24.950 } 00:11:24.950 ]' 00:11:24.950 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.950 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:24.950 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.950 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:24.950 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.950 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.950 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.950 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.208 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:11:25.208 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:11:26.143 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.143 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:26.143 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:26.143 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.143 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:26.143 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.143 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:26.143 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:26.402 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:26.968 00:11:26.968 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.968 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.968 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.227 { 00:11:27.227 "cntlid": 87, 00:11:27.227 "qid": 0, 00:11:27.227 "state": "enabled", 00:11:27.227 "thread": "nvmf_tgt_poll_group_000", 00:11:27.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:27.227 "listen_address": { 00:11:27.227 "trtype": "TCP", 00:11:27.227 "adrfam": "IPv4", 00:11:27.227 "traddr": "10.0.0.3", 00:11:27.227 "trsvcid": "4420" 00:11:27.227 }, 00:11:27.227 "peer_address": { 00:11:27.227 "trtype": "TCP", 00:11:27.227 "adrfam": "IPv4", 00:11:27.227 "traddr": "10.0.0.1", 00:11:27.227 "trsvcid": 
"38418" 00:11:27.227 }, 00:11:27.227 "auth": { 00:11:27.227 "state": "completed", 00:11:27.227 "digest": "sha384", 00:11:27.227 "dhgroup": "ffdhe6144" 00:11:27.227 } 00:11:27.227 } 00:11:27.227 ]' 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.227 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.791 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:27.792 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:28.358 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.358 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:28.358 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:28.358 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.358 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:28.358 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:28.358 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.358 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:28.358 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.617 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.183 00:11:29.183 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.183 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.183 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.442 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.442 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.442 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:29.442 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.442 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:29.442 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.442 { 00:11:29.442 "cntlid": 89, 00:11:29.442 "qid": 0, 00:11:29.442 "state": "enabled", 00:11:29.442 "thread": "nvmf_tgt_poll_group_000", 00:11:29.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:29.442 "listen_address": { 00:11:29.442 "trtype": "TCP", 00:11:29.442 "adrfam": "IPv4", 00:11:29.442 "traddr": "10.0.0.3", 00:11:29.442 "trsvcid": "4420" 00:11:29.442 }, 00:11:29.442 "peer_address": { 00:11:29.442 
"trtype": "TCP", 00:11:29.442 "adrfam": "IPv4", 00:11:29.442 "traddr": "10.0.0.1", 00:11:29.442 "trsvcid": "38430" 00:11:29.442 }, 00:11:29.442 "auth": { 00:11:29.442 "state": "completed", 00:11:29.442 "digest": "sha384", 00:11:29.442 "dhgroup": "ffdhe8192" 00:11:29.442 } 00:11:29.442 } 00:11:29.442 ]' 00:11:29.442 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.700 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.700 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.700 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:29.700 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.700 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.700 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.700 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.963 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:29.963 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:30.898 08:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.898 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.832 00:11:31.832 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.832 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.832 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.089 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.089 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.089 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:32.089 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.089 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:32.089 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.089 { 00:11:32.089 "cntlid": 91, 00:11:32.089 "qid": 0, 00:11:32.089 "state": "enabled", 00:11:32.089 "thread": "nvmf_tgt_poll_group_000", 00:11:32.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 
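
Every pass, including the one running here, begins by pinning the host down to a single digest and a single DH group, so whatever the qpair dump later reports must have been negotiated with exactly those parameters. The call, issued against the host RPC socket and shown here in isolation:

    # Restrict the host's DH-HMAC-CHAP negotiation to SHA-384 and ffdhe8192 for this pass.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
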
00:11:32.089 "listen_address": { 00:11:32.089 "trtype": "TCP", 00:11:32.089 "adrfam": "IPv4", 00:11:32.089 "traddr": "10.0.0.3", 00:11:32.089 "trsvcid": "4420" 00:11:32.090 }, 00:11:32.090 "peer_address": { 00:11:32.090 "trtype": "TCP", 00:11:32.090 "adrfam": "IPv4", 00:11:32.090 "traddr": "10.0.0.1", 00:11:32.090 "trsvcid": "38458" 00:11:32.090 }, 00:11:32.090 "auth": { 00:11:32.090 "state": "completed", 00:11:32.090 "digest": "sha384", 00:11:32.090 "dhgroup": "ffdhe8192" 00:11:32.090 } 00:11:32.090 } 00:11:32.090 ]' 00:11:32.090 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.090 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.090 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.090 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:32.090 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.090 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.090 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.090 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.348 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:32.348 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.282 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.215 00:11:34.215 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.215 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.215 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.473 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.473 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.473 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:34.473 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.473 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:34.473 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.473 { 00:11:34.473 "cntlid": 93, 00:11:34.473 "qid": 0, 00:11:34.473 "state": "enabled", 00:11:34.473 "thread": 
"nvmf_tgt_poll_group_000", 00:11:34.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:34.473 "listen_address": { 00:11:34.473 "trtype": "TCP", 00:11:34.473 "adrfam": "IPv4", 00:11:34.473 "traddr": "10.0.0.3", 00:11:34.473 "trsvcid": "4420" 00:11:34.473 }, 00:11:34.473 "peer_address": { 00:11:34.473 "trtype": "TCP", 00:11:34.473 "adrfam": "IPv4", 00:11:34.473 "traddr": "10.0.0.1", 00:11:34.473 "trsvcid": "44498" 00:11:34.473 }, 00:11:34.473 "auth": { 00:11:34.473 "state": "completed", 00:11:34.473 "digest": "sha384", 00:11:34.473 "dhgroup": "ffdhe8192" 00:11:34.473 } 00:11:34.473 } 00:11:34.473 ]' 00:11:34.473 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.473 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:34.473 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.473 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:34.473 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.473 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.474 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.474 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.732 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:11:34.732 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:11:35.667 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.667 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:35.667 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:35.667 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.667 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:35.667 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.667 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:35.667 08:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:35.667 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:35.667 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.667 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:35.667 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:35.667 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:35.667 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.667 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:11:35.667 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:35.667 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.667 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:35.667 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:35.667 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.668 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:36.601 00:11:36.601 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.601 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.601 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.601 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.601 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.601 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:36.601 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.601 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:36.601 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.601 { 00:11:36.601 "cntlid": 95, 00:11:36.601 "qid": 0, 00:11:36.601 "state": "enabled", 00:11:36.601 
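
Interleaved with the SPDK host attaches, each pass also detaches the SPDK-side controller, connects the kernel initiator with the literal DHHC-1 secret strings via nvme-cli, and then tears everything down again. The sequence in isolation (secrets abbreviated here; the full base64 strings appear in the log, and -i / -l are nvme-cli's --nr-io-queues and --ctrl-loss-tmo):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 \
        --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4
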
"thread": "nvmf_tgt_poll_group_000", 00:11:36.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:36.601 "listen_address": { 00:11:36.601 "trtype": "TCP", 00:11:36.601 "adrfam": "IPv4", 00:11:36.601 "traddr": "10.0.0.3", 00:11:36.601 "trsvcid": "4420" 00:11:36.601 }, 00:11:36.602 "peer_address": { 00:11:36.602 "trtype": "TCP", 00:11:36.602 "adrfam": "IPv4", 00:11:36.602 "traddr": "10.0.0.1", 00:11:36.602 "trsvcid": "44522" 00:11:36.602 }, 00:11:36.602 "auth": { 00:11:36.602 "state": "completed", 00:11:36.602 "digest": "sha384", 00:11:36.602 "dhgroup": "ffdhe8192" 00:11:36.602 } 00:11:36.602 } 00:11:36.602 ]' 00:11:36.602 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.859 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.859 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.859 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:36.859 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.859 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.859 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.859 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.117 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:37.117 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:37.683 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.683 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:37.683 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:37.683 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.940 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:37.940 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:37.940 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:37.941 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.941 08:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:37.941 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.199 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.457 00:11:38.457 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.457 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.457 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.718 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.718 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.718 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:38.718 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.718 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:38.718 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.718 { 00:11:38.718 "cntlid": 97, 00:11:38.718 "qid": 0, 00:11:38.718 "state": "enabled", 00:11:38.718 "thread": "nvmf_tgt_poll_group_000", 00:11:38.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:38.718 "listen_address": { 00:11:38.718 "trtype": "TCP", 00:11:38.718 "adrfam": "IPv4", 00:11:38.718 "traddr": "10.0.0.3", 00:11:38.718 "trsvcid": "4420" 00:11:38.718 }, 00:11:38.718 "peer_address": { 00:11:38.718 "trtype": "TCP", 00:11:38.718 "adrfam": "IPv4", 00:11:38.718 "traddr": "10.0.0.1", 00:11:38.718 "trsvcid": "44548" 00:11:38.718 }, 00:11:38.718 "auth": { 00:11:38.718 "state": "completed", 00:11:38.718 "digest": "sha512", 00:11:38.718 "dhgroup": "null" 00:11:38.718 } 00:11:38.718 } 00:11:38.718 ]' 00:11:38.718 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.718 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:38.718 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.977 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:38.977 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.977 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.977 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.977 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.235 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:39.235 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:40.170 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.170 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:40.170 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:40.170 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.170 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 
-- # [[ 0 == 0 ]] 00:11:40.170 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.170 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:40.170 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.428 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.691 00:11:40.691 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.691 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.691 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.977 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.977 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.977 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:40.977 08:24:28 
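
By this point the digest loop has moved on to sha512 and the DH-group loop has restarted at "null", i.e. plain challenge-response with no Diffie-Hellman exchange. The sweep producing all of these near-identical passes is, in paraphrase (hostrpc and connect_authenticate are helpers defined in auth.sh, and the array contents below are illustrative; the real lists are set earlier in the script):

    digests=(sha256 sha384 sha512)                                    # illustrative
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192) # illustrative
    keys=(key0 key1 key2 key3)                                        # names as seen in this log
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # hostrpc wraps rpc.py -s /var/tmp/host.sock
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
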
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.977 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:40.977 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.977 { 00:11:40.977 "cntlid": 99, 00:11:40.977 "qid": 0, 00:11:40.977 "state": "enabled", 00:11:40.977 "thread": "nvmf_tgt_poll_group_000", 00:11:40.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:40.977 "listen_address": { 00:11:40.977 "trtype": "TCP", 00:11:40.977 "adrfam": "IPv4", 00:11:40.977 "traddr": "10.0.0.3", 00:11:40.977 "trsvcid": "4420" 00:11:40.977 }, 00:11:40.977 "peer_address": { 00:11:40.977 "trtype": "TCP", 00:11:40.977 "adrfam": "IPv4", 00:11:40.977 "traddr": "10.0.0.1", 00:11:40.977 "trsvcid": "44588" 00:11:40.977 }, 00:11:40.977 "auth": { 00:11:40.977 "state": "completed", 00:11:40.977 "digest": "sha512", 00:11:40.977 "dhgroup": "null" 00:11:40.977 } 00:11:40.977 } 00:11:40.977 ]' 00:11:40.977 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.977 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:40.977 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.235 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:41.235 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.235 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.235 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.235 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.494 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:41.494 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:42.063 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.063 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:42.063 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:42.063 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.063 08:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:42.063 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.063 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:42.064 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.321 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.887 00:11:42.887 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.887 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.887 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@566 -- # xtrace_disable 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.145 { 00:11:43.145 "cntlid": 101, 00:11:43.145 "qid": 0, 00:11:43.145 "state": "enabled", 00:11:43.145 "thread": "nvmf_tgt_poll_group_000", 00:11:43.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:43.145 "listen_address": { 00:11:43.145 "trtype": "TCP", 00:11:43.145 "adrfam": "IPv4", 00:11:43.145 "traddr": "10.0.0.3", 00:11:43.145 "trsvcid": "4420" 00:11:43.145 }, 00:11:43.145 "peer_address": { 00:11:43.145 "trtype": "TCP", 00:11:43.145 "adrfam": "IPv4", 00:11:43.145 "traddr": "10.0.0.1", 00:11:43.145 "trsvcid": "44616" 00:11:43.145 }, 00:11:43.145 "auth": { 00:11:43.145 "state": "completed", 00:11:43.145 "digest": "sha512", 00:11:43.145 "dhgroup": "null" 00:11:43.145 } 00:11:43.145 } 00:11:43.145 ]' 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.145 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.712 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:11:43.713 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:11:44.280 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.280 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:44.280 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:44.280 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:44.280 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:44.280 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.280 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:44.280 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.539 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.799 00:11:44.799 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.799 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.799 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.058 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.058 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.058 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # 
xtrace_disable 00:11:45.058 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.058 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:45.058 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.058 { 00:11:45.058 "cntlid": 103, 00:11:45.058 "qid": 0, 00:11:45.058 "state": "enabled", 00:11:45.058 "thread": "nvmf_tgt_poll_group_000", 00:11:45.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:45.058 "listen_address": { 00:11:45.058 "trtype": "TCP", 00:11:45.058 "adrfam": "IPv4", 00:11:45.058 "traddr": "10.0.0.3", 00:11:45.058 "trsvcid": "4420" 00:11:45.058 }, 00:11:45.058 "peer_address": { 00:11:45.058 "trtype": "TCP", 00:11:45.058 "adrfam": "IPv4", 00:11:45.058 "traddr": "10.0.0.1", 00:11:45.058 "trsvcid": "60696" 00:11:45.058 }, 00:11:45.058 "auth": { 00:11:45.058 "state": "completed", 00:11:45.058 "digest": "sha512", 00:11:45.058 "dhgroup": "null" 00:11:45.058 } 00:11:45.058 } 00:11:45.058 ]' 00:11:45.058 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.058 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:45.058 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.317 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:45.317 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.317 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.317 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.317 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.575 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:45.575 08:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:46.143 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.143 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:46.143 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:46.143 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.143 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 
]] 00:11:46.143 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:46.143 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.143 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:46.143 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.709 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.967 00:11:46.967 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.967 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.967 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.225 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.225 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.225 
08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:47.225 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.225 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:47.225 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.225 { 00:11:47.225 "cntlid": 105, 00:11:47.225 "qid": 0, 00:11:47.225 "state": "enabled", 00:11:47.225 "thread": "nvmf_tgt_poll_group_000", 00:11:47.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:47.225 "listen_address": { 00:11:47.225 "trtype": "TCP", 00:11:47.225 "adrfam": "IPv4", 00:11:47.225 "traddr": "10.0.0.3", 00:11:47.225 "trsvcid": "4420" 00:11:47.225 }, 00:11:47.226 "peer_address": { 00:11:47.226 "trtype": "TCP", 00:11:47.226 "adrfam": "IPv4", 00:11:47.226 "traddr": "10.0.0.1", 00:11:47.226 "trsvcid": "60720" 00:11:47.226 }, 00:11:47.226 "auth": { 00:11:47.226 "state": "completed", 00:11:47.226 "digest": "sha512", 00:11:47.226 "dhgroup": "ffdhe2048" 00:11:47.226 } 00:11:47.226 } 00:11:47.226 ]' 00:11:47.226 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.226 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:47.226 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.484 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:47.484 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.484 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.484 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.484 08:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.743 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:47.743 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:48.310 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.310 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:48.310 08:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:48.310 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.310 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:48.310 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.310 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:48.310 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.567 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.849 00:11:49.107 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.107 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.107 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.366 { 00:11:49.366 "cntlid": 107, 00:11:49.366 "qid": 0, 00:11:49.366 "state": "enabled", 00:11:49.366 "thread": "nvmf_tgt_poll_group_000", 00:11:49.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:49.366 "listen_address": { 00:11:49.366 "trtype": "TCP", 00:11:49.366 "adrfam": "IPv4", 00:11:49.366 "traddr": "10.0.0.3", 00:11:49.366 "trsvcid": "4420" 00:11:49.366 }, 00:11:49.366 "peer_address": { 00:11:49.366 "trtype": "TCP", 00:11:49.366 "adrfam": "IPv4", 00:11:49.366 "traddr": "10.0.0.1", 00:11:49.366 "trsvcid": "60738" 00:11:49.366 }, 00:11:49.366 "auth": { 00:11:49.366 "state": "completed", 00:11:49.366 "digest": "sha512", 00:11:49.366 "dhgroup": "ffdhe2048" 00:11:49.366 } 00:11:49.366 } 00:11:49.366 ]' 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.366 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.931 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:49.931 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:50.499 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.499 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:50.499 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:50.499 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.499 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:50.499 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.499 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:50.499 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.765 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:51.037 00:11:51.037 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.037 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.037 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:11:51.295 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.295 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.295 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:51.295 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.295 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:51.295 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.295 { 00:11:51.295 "cntlid": 109, 00:11:51.295 "qid": 0, 00:11:51.295 "state": "enabled", 00:11:51.295 "thread": "nvmf_tgt_poll_group_000", 00:11:51.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:51.295 "listen_address": { 00:11:51.295 "trtype": "TCP", 00:11:51.295 "adrfam": "IPv4", 00:11:51.295 "traddr": "10.0.0.3", 00:11:51.295 "trsvcid": "4420" 00:11:51.295 }, 00:11:51.295 "peer_address": { 00:11:51.295 "trtype": "TCP", 00:11:51.295 "adrfam": "IPv4", 00:11:51.295 "traddr": "10.0.0.1", 00:11:51.295 "trsvcid": "60764" 00:11:51.295 }, 00:11:51.295 "auth": { 00:11:51.295 "state": "completed", 00:11:51.295 "digest": "sha512", 00:11:51.295 "dhgroup": "ffdhe2048" 00:11:51.295 } 00:11:51.295 } 00:11:51.295 ]' 00:11:51.295 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.295 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:51.295 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.295 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:51.295 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.554 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.554 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.554 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.814 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:11:51.814 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:11:52.382 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.382 08:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:52.382 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:52.382 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.382 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:52.383 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.383 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:52.383 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:52.641 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:53.208 00:11:53.208 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.208 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.208 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.208 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.208 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.208 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:53.208 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.208 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:53.208 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.208 { 00:11:53.208 "cntlid": 111, 00:11:53.208 "qid": 0, 00:11:53.208 "state": "enabled", 00:11:53.208 "thread": "nvmf_tgt_poll_group_000", 00:11:53.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:53.208 "listen_address": { 00:11:53.208 "trtype": "TCP", 00:11:53.208 "adrfam": "IPv4", 00:11:53.208 "traddr": "10.0.0.3", 00:11:53.208 "trsvcid": "4420" 00:11:53.208 }, 00:11:53.208 "peer_address": { 00:11:53.208 "trtype": "TCP", 00:11:53.208 "adrfam": "IPv4", 00:11:53.208 "traddr": "10.0.0.1", 00:11:53.208 "trsvcid": "39776" 00:11:53.208 }, 00:11:53.208 "auth": { 00:11:53.208 "state": "completed", 00:11:53.208 "digest": "sha512", 00:11:53.208 "dhgroup": "ffdhe2048" 00:11:53.208 } 00:11:53.208 } 00:11:53.208 ]' 00:11:53.208 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.468 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:53.468 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.468 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:53.468 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.468 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.468 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.468 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.727 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:53.727 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:11:54.294 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.294 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:54.294 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:54.294 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.294 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:54.294 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.294 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.294 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:54.294 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.861 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.119 00:11:55.119 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.119 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
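The iteration running at this point (sha512 digest, ffdhe3072 dhgroup, key0) repeats the same host-side round seen throughout this trace: restrict the initiator's DH-HMAC-CHAP options, register the host on the subsystem with the key pair under test, attach a controller so authentication runs during CONNECT, confirm the qpair reports a completed auth state, then tear down before the next combination. A condensed sketch of one such round, assembled only from the rpc.py and jq calls visible above (socket paths, NQNs, addresses and key names are this test fixture's values, and the target-side calls are assumed to go to the nvmf target's default RPC socket), is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock        # host-side bdev_nvme RPC server used by hostrpc
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4

  # 1) Limit the host to the digest/dhgroup under test.
  "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # 2) Allow this host on the subsystem with the key pair (target-side RPC).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3) Attach a controller; DH-HMAC-CHAP runs as part of the fabrics CONNECT.
  "$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 4) Verify the target sees an authenticated qpair ("completed" is the expected state).
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'

  # 5) Tear down before the next digest/dhgroup/key combination.
  "$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The kernel-initiator path exercised by nvme_connect in the same script passes the secrets themselves rather than key names (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...), followed by nvme disconnect -n "$subnqn" once the check is done.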
00:11:55.119 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.377 { 00:11:55.377 "cntlid": 113, 00:11:55.377 "qid": 0, 00:11:55.377 "state": "enabled", 00:11:55.377 "thread": "nvmf_tgt_poll_group_000", 00:11:55.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:55.377 "listen_address": { 00:11:55.377 "trtype": "TCP", 00:11:55.377 "adrfam": "IPv4", 00:11:55.377 "traddr": "10.0.0.3", 00:11:55.377 "trsvcid": "4420" 00:11:55.377 }, 00:11:55.377 "peer_address": { 00:11:55.377 "trtype": "TCP", 00:11:55.377 "adrfam": "IPv4", 00:11:55.377 "traddr": "10.0.0.1", 00:11:55.377 "trsvcid": "39812" 00:11:55.377 }, 00:11:55.377 "auth": { 00:11:55.377 "state": "completed", 00:11:55.377 "digest": "sha512", 00:11:55.377 "dhgroup": "ffdhe3072" 00:11:55.377 } 00:11:55.377 } 00:11:55.377 ]' 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.377 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.975 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:55.975 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret 
DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:11:56.562 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.562 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:56.562 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:56.562 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.562 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:56.563 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.563 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:56.563 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.821 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.079 00:11:57.338 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.338 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.338 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.596 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.596 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.596 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:57.596 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.596 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:57.596 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.596 { 00:11:57.596 "cntlid": 115, 00:11:57.596 "qid": 0, 00:11:57.596 "state": "enabled", 00:11:57.596 "thread": "nvmf_tgt_poll_group_000", 00:11:57.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:57.596 "listen_address": { 00:11:57.596 "trtype": "TCP", 00:11:57.596 "adrfam": "IPv4", 00:11:57.596 "traddr": "10.0.0.3", 00:11:57.596 "trsvcid": "4420" 00:11:57.596 }, 00:11:57.596 "peer_address": { 00:11:57.596 "trtype": "TCP", 00:11:57.596 "adrfam": "IPv4", 00:11:57.596 "traddr": "10.0.0.1", 00:11:57.596 "trsvcid": "39834" 00:11:57.596 }, 00:11:57.596 "auth": { 00:11:57.596 "state": "completed", 00:11:57.596 "digest": "sha512", 00:11:57.596 "dhgroup": "ffdhe3072" 00:11:57.596 } 00:11:57.596 } 00:11:57.596 ]' 00:11:57.596 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.597 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.597 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.597 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:57.597 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.597 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.597 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.597 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.855 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:57.855 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 
3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:11:58.791 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.791 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:11:58.791 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:58.791 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.791 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:58.791 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.791 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:58.791 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.050 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.308 00:11:59.308 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.308 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.308 08:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.566 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.566 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.566 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:11:59.566 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.566 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:11:59.566 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.566 { 00:11:59.566 "cntlid": 117, 00:11:59.566 "qid": 0, 00:11:59.566 "state": "enabled", 00:11:59.566 "thread": "nvmf_tgt_poll_group_000", 00:11:59.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:11:59.566 "listen_address": { 00:11:59.566 "trtype": "TCP", 00:11:59.566 "adrfam": "IPv4", 00:11:59.566 "traddr": "10.0.0.3", 00:11:59.566 "trsvcid": "4420" 00:11:59.566 }, 00:11:59.566 "peer_address": { 00:11:59.566 "trtype": "TCP", 00:11:59.566 "adrfam": "IPv4", 00:11:59.566 "traddr": "10.0.0.1", 00:11:59.566 "trsvcid": "39862" 00:11:59.566 }, 00:11:59.566 "auth": { 00:11:59.566 "state": "completed", 00:11:59.566 "digest": "sha512", 00:11:59.566 "dhgroup": "ffdhe3072" 00:11:59.566 } 00:11:59.566 } 00:11:59.566 ]' 00:11:59.566 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.825 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.825 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.825 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:59.825 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.825 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.825 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.825 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.083 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:12:00.084 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:12:00.651 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.651 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:00.651 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:00.651 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.908 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:00.908 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.908 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:00.908 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:01.164 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:01.422 00:12:01.422 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.422 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.422 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.988 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.988 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.989 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:01.989 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.989 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:01.989 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.989 { 00:12:01.989 "cntlid": 119, 00:12:01.989 "qid": 0, 00:12:01.989 "state": "enabled", 00:12:01.989 "thread": "nvmf_tgt_poll_group_000", 00:12:01.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:01.989 "listen_address": { 00:12:01.989 "trtype": "TCP", 00:12:01.989 "adrfam": "IPv4", 00:12:01.989 "traddr": "10.0.0.3", 00:12:01.989 "trsvcid": "4420" 00:12:01.989 }, 00:12:01.989 "peer_address": { 00:12:01.989 "trtype": "TCP", 00:12:01.989 "adrfam": "IPv4", 00:12:01.989 "traddr": "10.0.0.1", 00:12:01.989 "trsvcid": "39896" 00:12:01.989 }, 00:12:01.989 "auth": { 00:12:01.989 "state": "completed", 00:12:01.989 "digest": "sha512", 00:12:01.989 "dhgroup": "ffdhe3072" 00:12:01.989 } 00:12:01.989 } 00:12:01.989 ]' 00:12:01.989 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.989 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.989 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.989 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:01.989 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.989 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.989 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.989 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.260 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:12:02.261 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:12:02.849 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.849 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:02.849 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:02.849 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.849 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:02.849 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:02.849 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.849 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:02.849 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:03.415 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:03.416 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.416 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:03.416 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:03.416 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:03.416 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.416 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.416 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:03.416 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.416 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:03.416 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.416 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.416 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.674 00:12:03.674 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.674 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.674 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.931 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.931 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.931 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:03.931 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.931 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:03.931 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.931 { 00:12:03.931 "cntlid": 121, 00:12:03.931 "qid": 0, 00:12:03.931 "state": "enabled", 00:12:03.931 "thread": "nvmf_tgt_poll_group_000", 00:12:03.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:03.931 "listen_address": { 00:12:03.931 "trtype": "TCP", 00:12:03.931 "adrfam": "IPv4", 00:12:03.931 "traddr": "10.0.0.3", 00:12:03.931 "trsvcid": "4420" 00:12:03.931 }, 00:12:03.931 "peer_address": { 00:12:03.931 "trtype": "TCP", 00:12:03.931 "adrfam": "IPv4", 00:12:03.931 "traddr": "10.0.0.1", 00:12:03.931 "trsvcid": "47148" 00:12:03.931 }, 00:12:03.931 "auth": { 00:12:03.931 "state": "completed", 00:12:03.931 "digest": "sha512", 00:12:03.931 "dhgroup": "ffdhe4096" 00:12:03.931 } 00:12:03.931 } 00:12:03.931 ]' 00:12:03.931 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.931 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.931 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.931 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.931 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.188 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.188 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.188 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.447 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret 
DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:12:04.447 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:12:05.013 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.013 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:05.013 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:05.013 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.013 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:05.013 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.013 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:05.013 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.272 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.838 00:12:05.838 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.838 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.838 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.096 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.097 { 00:12:06.097 "cntlid": 123, 00:12:06.097 "qid": 0, 00:12:06.097 "state": "enabled", 00:12:06.097 "thread": "nvmf_tgt_poll_group_000", 00:12:06.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:06.097 "listen_address": { 00:12:06.097 "trtype": "TCP", 00:12:06.097 "adrfam": "IPv4", 00:12:06.097 "traddr": "10.0.0.3", 00:12:06.097 "trsvcid": "4420" 00:12:06.097 }, 00:12:06.097 "peer_address": { 00:12:06.097 "trtype": "TCP", 00:12:06.097 "adrfam": "IPv4", 00:12:06.097 "traddr": "10.0.0.1", 00:12:06.097 "trsvcid": "47186" 00:12:06.097 }, 00:12:06.097 "auth": { 00:12:06.097 "state": "completed", 00:12:06.097 "digest": "sha512", 00:12:06.097 "dhgroup": "ffdhe4096" 00:12:06.097 } 00:12:06.097 } 00:12:06.097 ]' 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.097 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.664 08:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:12:06.664 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:12:07.231 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.231 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:07.231 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:07.231 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.231 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:07.231 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.232 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:07.232 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:07.490 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:07.490 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.490 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:07.490 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:07.490 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:07.490 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.490 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.490 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:07.490 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.490 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:07.490 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.490 08:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.490 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.058 00:12:08.058 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.058 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.058 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.316 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.316 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.316 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:08.316 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.316 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:08.316 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.316 { 00:12:08.316 "cntlid": 125, 00:12:08.316 "qid": 0, 00:12:08.316 "state": "enabled", 00:12:08.316 "thread": "nvmf_tgt_poll_group_000", 00:12:08.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:08.316 "listen_address": { 00:12:08.316 "trtype": "TCP", 00:12:08.316 "adrfam": "IPv4", 00:12:08.316 "traddr": "10.0.0.3", 00:12:08.316 "trsvcid": "4420" 00:12:08.316 }, 00:12:08.316 "peer_address": { 00:12:08.316 "trtype": "TCP", 00:12:08.316 "adrfam": "IPv4", 00:12:08.316 "traddr": "10.0.0.1", 00:12:08.316 "trsvcid": "47204" 00:12:08.316 }, 00:12:08.316 "auth": { 00:12:08.316 "state": "completed", 00:12:08.316 "digest": "sha512", 00:12:08.316 "dhgroup": "ffdhe4096" 00:12:08.316 } 00:12:08.316 } 00:12:08.316 ]' 00:12:08.316 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.316 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.316 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.316 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:08.316 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.574 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.574 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.574 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.833 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:12:08.833 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:12:09.399 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.399 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:09.399 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:09.400 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.400 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:09.400 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.400 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:09.400 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:09.658 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:09.916 00:12:09.916 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.916 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.916 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.483 { 00:12:10.483 "cntlid": 127, 00:12:10.483 "qid": 0, 00:12:10.483 "state": "enabled", 00:12:10.483 "thread": "nvmf_tgt_poll_group_000", 00:12:10.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:10.483 "listen_address": { 00:12:10.483 "trtype": "TCP", 00:12:10.483 "adrfam": "IPv4", 00:12:10.483 "traddr": "10.0.0.3", 00:12:10.483 "trsvcid": "4420" 00:12:10.483 }, 00:12:10.483 "peer_address": { 00:12:10.483 "trtype": "TCP", 00:12:10.483 "adrfam": "IPv4", 00:12:10.483 "traddr": "10.0.0.1", 00:12:10.483 "trsvcid": "47214" 00:12:10.483 }, 00:12:10.483 "auth": { 00:12:10.483 "state": "completed", 00:12:10.483 "digest": "sha512", 00:12:10.483 "dhgroup": "ffdhe4096" 00:12:10.483 } 00:12:10.483 } 00:12:10.483 ]' 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.483 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.740 08:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:12:10.740 08:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:12:11.305 08:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.563 08:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:11.563 08:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:11.563 08:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.563 08:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:11.563 08:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:11.563 08:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.563 08:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:11.563 08:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:11.821 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:11.821 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.821 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:11.821 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:11.821 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:11.821 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.821 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.821 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:11.821 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.821 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:11.821 08:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.821 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.821 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.078 00:12:12.078 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.078 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.078 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.644 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.644 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.644 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:12.644 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.644 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:12.644 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.644 { 00:12:12.644 "cntlid": 129, 00:12:12.644 "qid": 0, 00:12:12.644 "state": "enabled", 00:12:12.644 "thread": "nvmf_tgt_poll_group_000", 00:12:12.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:12.644 "listen_address": { 00:12:12.644 "trtype": "TCP", 00:12:12.644 "adrfam": "IPv4", 00:12:12.644 "traddr": "10.0.0.3", 00:12:12.644 "trsvcid": "4420" 00:12:12.644 }, 00:12:12.644 "peer_address": { 00:12:12.644 "trtype": "TCP", 00:12:12.644 "adrfam": "IPv4", 00:12:12.644 "traddr": "10.0.0.1", 00:12:12.644 "trsvcid": "47252" 00:12:12.644 }, 00:12:12.644 "auth": { 00:12:12.644 "state": "completed", 00:12:12.644 "digest": "sha512", 00:12:12.644 "dhgroup": "ffdhe6144" 00:12:12.644 } 00:12:12.644 } 00:12:12.644 ]' 00:12:12.644 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.644 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.644 08:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.644 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:12.644 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.644 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.644 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.644 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.903 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:12:12.903 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:12:13.470 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.470 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:13.470 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:13.470 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.470 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:13.470 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.470 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:13.470 08:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:14.036 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:14.036 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.036 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:14.036 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:14.036 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:14.036 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.036 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.036 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:14.036 08:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.036 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:14.036 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.037 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.037 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.295 00:12:14.295 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.295 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.295 08:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.554 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.554 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.554 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:14.554 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.554 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:14.554 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.554 { 00:12:14.554 "cntlid": 131, 00:12:14.554 "qid": 0, 00:12:14.554 "state": "enabled", 00:12:14.554 "thread": "nvmf_tgt_poll_group_000", 00:12:14.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:14.554 "listen_address": { 00:12:14.554 "trtype": "TCP", 00:12:14.554 "adrfam": "IPv4", 00:12:14.554 "traddr": "10.0.0.3", 00:12:14.554 "trsvcid": "4420" 00:12:14.554 }, 00:12:14.554 "peer_address": { 00:12:14.554 "trtype": "TCP", 00:12:14.554 "adrfam": "IPv4", 00:12:14.554 "traddr": "10.0.0.1", 00:12:14.554 "trsvcid": "58114" 00:12:14.554 }, 00:12:14.554 "auth": { 00:12:14.554 "state": "completed", 00:12:14.554 "digest": "sha512", 00:12:14.554 "dhgroup": "ffdhe6144" 00:12:14.554 } 00:12:14.554 } 00:12:14.554 ]' 00:12:14.554 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.813 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.813 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.813 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:14.813 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:14.813 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.813 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.813 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.071 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:12:15.071 08:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:12:15.649 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.649 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:15.649 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:15.649 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.908 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:15.908 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.908 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:15.908 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:16.166 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:16.166 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.166 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:16.166 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:16.166 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:16.166 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.166 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.166 08:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:16.166 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.166 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:16.166 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.166 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.166 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.426 00:12:16.426 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.426 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.426 08:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.684 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.684 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.684 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:16.684 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.943 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:16.943 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.943 { 00:12:16.943 "cntlid": 133, 00:12:16.943 "qid": 0, 00:12:16.943 "state": "enabled", 00:12:16.943 "thread": "nvmf_tgt_poll_group_000", 00:12:16.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:16.943 "listen_address": { 00:12:16.943 "trtype": "TCP", 00:12:16.943 "adrfam": "IPv4", 00:12:16.943 "traddr": "10.0.0.3", 00:12:16.943 "trsvcid": "4420" 00:12:16.943 }, 00:12:16.943 "peer_address": { 00:12:16.943 "trtype": "TCP", 00:12:16.943 "adrfam": "IPv4", 00:12:16.943 "traddr": "10.0.0.1", 00:12:16.943 "trsvcid": "58134" 00:12:16.943 }, 00:12:16.943 "auth": { 00:12:16.943 "state": "completed", 00:12:16.943 "digest": "sha512", 00:12:16.943 "dhgroup": "ffdhe6144" 00:12:16.943 } 00:12:16.943 } 00:12:16.943 ]' 00:12:16.943 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.943 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.943 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.943 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:16.943 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.943 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.943 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.943 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.201 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:12:17.201 08:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:12:17.768 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.768 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:17.768 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:17.768 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.768 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:17.768 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.768 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:17.768 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:18.334 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:18.334 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.334 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:18.334 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:18.334 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:18.334 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.335 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:12:18.335 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:18.335 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.335 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:18.335 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:18.335 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.335 08:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.593 00:12:18.851 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.851 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.851 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.110 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.110 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.110 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:19.110 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.110 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:19.110 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.110 { 00:12:19.110 "cntlid": 135, 00:12:19.110 "qid": 0, 00:12:19.110 "state": "enabled", 00:12:19.110 "thread": "nvmf_tgt_poll_group_000", 00:12:19.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:19.110 "listen_address": { 00:12:19.110 "trtype": "TCP", 00:12:19.110 "adrfam": "IPv4", 00:12:19.110 "traddr": "10.0.0.3", 00:12:19.110 "trsvcid": "4420" 00:12:19.110 }, 00:12:19.110 "peer_address": { 00:12:19.110 "trtype": "TCP", 00:12:19.110 "adrfam": "IPv4", 00:12:19.110 "traddr": "10.0.0.1", 00:12:19.110 "trsvcid": "58150" 00:12:19.110 }, 00:12:19.110 "auth": { 00:12:19.110 "state": "completed", 00:12:19.110 "digest": "sha512", 00:12:19.110 "dhgroup": "ffdhe6144" 00:12:19.110 } 00:12:19.110 } 00:12:19.110 ]' 00:12:19.110 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.110 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:19.110 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.110 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:19.110 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.368 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.368 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.368 08:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.627 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:12:19.627 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:12:20.193 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.193 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:20.193 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:20.193 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.193 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:20.193 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:20.193 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.193 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:20.193 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.451 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.396 00:12:21.396 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.396 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.396 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.396 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.396 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.396 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:21.396 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.396 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:21.396 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.396 { 00:12:21.396 "cntlid": 137, 00:12:21.396 "qid": 0, 00:12:21.396 "state": "enabled", 00:12:21.396 "thread": "nvmf_tgt_poll_group_000", 00:12:21.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:21.396 "listen_address": { 00:12:21.396 "trtype": "TCP", 00:12:21.396 "adrfam": "IPv4", 00:12:21.396 "traddr": "10.0.0.3", 00:12:21.396 "trsvcid": "4420" 00:12:21.396 }, 00:12:21.396 "peer_address": { 00:12:21.396 "trtype": "TCP", 00:12:21.396 "adrfam": "IPv4", 00:12:21.396 "traddr": "10.0.0.1", 00:12:21.396 "trsvcid": "58174" 00:12:21.396 }, 00:12:21.396 "auth": { 00:12:21.396 "state": "completed", 00:12:21.396 "digest": "sha512", 00:12:21.396 "dhgroup": "ffdhe8192" 00:12:21.396 } 00:12:21.396 } 00:12:21.396 ]' 00:12:21.396 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.671 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.671 08:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.671 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:21.671 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.671 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.671 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.671 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.930 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:12:21.930 08:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:12:22.864 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.864 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:22.864 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:22.864 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.864 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:22.864 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.864 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:22.864 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:23.122 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:23.122 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.122 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:23.122 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:23.122 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:23.122 08:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.122 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.122 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:23.122 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.122 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:23.122 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.122 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.122 08:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.689 00:12:23.689 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.689 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.689 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.948 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.948 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.948 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:23.948 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.948 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:23.948 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.948 { 00:12:23.948 "cntlid": 139, 00:12:23.948 "qid": 0, 00:12:23.948 "state": "enabled", 00:12:23.948 "thread": "nvmf_tgt_poll_group_000", 00:12:23.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:23.948 "listen_address": { 00:12:23.948 "trtype": "TCP", 00:12:23.948 "adrfam": "IPv4", 00:12:23.948 "traddr": "10.0.0.3", 00:12:23.948 "trsvcid": "4420" 00:12:23.948 }, 00:12:23.948 "peer_address": { 00:12:23.948 "trtype": "TCP", 00:12:23.948 "adrfam": "IPv4", 00:12:23.948 "traddr": "10.0.0.1", 00:12:23.948 "trsvcid": "33880" 00:12:23.948 }, 00:12:23.948 "auth": { 00:12:23.948 "state": "completed", 00:12:23.948 "digest": "sha512", 00:12:23.948 "dhgroup": "ffdhe8192" 00:12:23.948 } 00:12:23.948 } 00:12:23.948 ]' 00:12:23.948 08:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.948 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.948 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.948 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.948 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.206 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.206 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.206 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.464 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:12:24.464 08:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: --dhchap-ctrl-secret DHHC-1:02:ZjdmZDgzOTQ0OGM5MWZlNWMwOWJhOTYwMjc1Y2I0NjI0ZGJmYjhiYjM3MjkwYTZhR8ZOSw==: 00:12:25.030 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.030 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:25.030 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:25.030 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.030 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:25.030 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.030 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:25.030 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.288 08:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.222 00:12:26.222 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.222 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.222 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.481 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.481 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.481 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:26.481 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.481 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:26.481 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.481 { 00:12:26.481 "cntlid": 141, 00:12:26.481 "qid": 0, 00:12:26.481 "state": "enabled", 00:12:26.481 "thread": "nvmf_tgt_poll_group_000", 00:12:26.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:26.481 "listen_address": { 00:12:26.481 "trtype": "TCP", 00:12:26.481 "adrfam": "IPv4", 00:12:26.481 "traddr": "10.0.0.3", 00:12:26.481 "trsvcid": "4420" 00:12:26.481 }, 00:12:26.481 "peer_address": { 00:12:26.481 "trtype": "TCP", 00:12:26.481 "adrfam": "IPv4", 00:12:26.481 "traddr": "10.0.0.1", 00:12:26.481 "trsvcid": "33910" 00:12:26.481 }, 00:12:26.481 "auth": { 00:12:26.481 "state": "completed", 00:12:26.481 "digest": 
"sha512", 00:12:26.481 "dhgroup": "ffdhe8192" 00:12:26.481 } 00:12:26.481 } 00:12:26.481 ]' 00:12:26.481 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.481 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.481 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.481 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:26.481 08:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.481 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.481 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.481 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.047 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:12:27.047 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:01:NDQ4YzA4ODIyYWJjMWNhYzU3ZjA3ZGFjOGRmYzVlYWTWS9YZ: 00:12:27.689 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.689 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:27.689 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:27.689 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.689 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:27.689 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.689 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:27.689 08:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:27.967 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.534 00:12:28.534 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.534 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.534 08:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.792 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.792 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.792 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:28.792 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.792 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:28.792 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.792 { 00:12:28.792 "cntlid": 143, 00:12:28.792 "qid": 0, 00:12:28.792 "state": "enabled", 00:12:28.792 "thread": "nvmf_tgt_poll_group_000", 00:12:28.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:28.792 "listen_address": { 00:12:28.792 "trtype": "TCP", 00:12:28.792 "adrfam": "IPv4", 00:12:28.792 "traddr": "10.0.0.3", 00:12:28.792 "trsvcid": "4420" 00:12:28.792 }, 00:12:28.792 "peer_address": { 00:12:28.792 "trtype": "TCP", 00:12:28.792 "adrfam": "IPv4", 00:12:28.792 "traddr": "10.0.0.1", 00:12:28.792 "trsvcid": "33938" 00:12:28.792 }, 00:12:28.792 "auth": { 00:12:28.792 "state": "completed", 00:12:28.792 
"digest": "sha512", 00:12:28.792 "dhgroup": "ffdhe8192" 00:12:28.792 } 00:12:28.792 } 00:12:28.792 ]' 00:12:28.792 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.792 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.792 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.792 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:29.051 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.051 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.051 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.051 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.310 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:12:29.310 08:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:12:29.877 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.877 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:29.877 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:29.877 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.877 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:29.877 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:29.877 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:29.877 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:29.877 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:29.877 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:29.877 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.136 08:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.070 00:12:31.070 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.070 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.070 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.070 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.070 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.070 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:31.070 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.070 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:31.070 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.070 { 00:12:31.070 "cntlid": 145, 00:12:31.070 "qid": 0, 00:12:31.070 "state": "enabled", 00:12:31.070 "thread": "nvmf_tgt_poll_group_000", 00:12:31.070 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:31.070 "listen_address": { 00:12:31.070 "trtype": "TCP", 00:12:31.070 "adrfam": "IPv4", 00:12:31.070 "traddr": "10.0.0.3", 00:12:31.070 "trsvcid": "4420" 00:12:31.070 }, 00:12:31.070 "peer_address": { 00:12:31.070 "trtype": "TCP", 00:12:31.070 "adrfam": "IPv4", 00:12:31.070 "traddr": "10.0.0.1", 00:12:31.070 "trsvcid": "33948" 00:12:31.070 }, 00:12:31.070 "auth": { 00:12:31.070 "state": "completed", 00:12:31.070 "digest": "sha512", 00:12:31.070 "dhgroup": "ffdhe8192" 00:12:31.070 } 00:12:31.070 } 00:12:31.070 ]' 00:12:31.070 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.329 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.329 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.329 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:31.329 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.329 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.329 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.329 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.587 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:12:31.587 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:00:NTJkNTUwZjY3NmU3OWNkMzIzYWIwOGExMmI2MzUzNzBiNDFiNzU0MGRkZjcyMzhmhVJL6A==: --dhchap-ctrl-secret DHHC-1:03:ZGU4NjdkNmYwM2I3ZTBhN2Y2ZDY2YWU3OWMyNGJlMGM1MjA4NWIyNTRmNjNjMjJlN2NhNGNhNmI2NzM0MWU2Nghg1YI=: 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 00:12:32.169 08:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # local es=0 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # local arg=bdev_connect 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # type -t bdev_connect 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:32.169 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:32.737 request: 00:12:32.737 { 00:12:32.737 "name": "nvme0", 00:12:32.737 "trtype": "tcp", 00:12:32.737 "traddr": "10.0.0.3", 00:12:32.737 "adrfam": "ipv4", 00:12:32.737 "trsvcid": "4420", 00:12:32.737 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:32.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:32.737 "prchk_reftag": false, 00:12:32.737 "prchk_guard": false, 00:12:32.737 "hdgst": false, 00:12:32.737 "ddgst": false, 00:12:32.737 "dhchap_key": "key2", 00:12:32.737 "allow_unrecognized_csi": false, 00:12:32.737 "method": "bdev_nvme_attach_controller", 00:12:32.737 "req_id": 1 00:12:32.737 } 00:12:32.737 Got JSON-RPC error response 00:12:32.737 response: 00:12:32.737 { 00:12:32.737 "code": -5, 00:12:32.737 "message": "Input/output error" 00:12:32.737 } 00:12:32.737 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # es=1 00:12:32.737 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:12:32.737 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:12:32.737 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:12:32.737 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:32.737 
08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:32.737 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.737 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:32.737 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.737 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:32.737 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.737 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:32.737 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:32.738 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # local es=0 00:12:32.738 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:32.738 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # local arg=bdev_connect 00:12:32.738 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:32.738 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # type -t bdev_connect 00:12:32.996 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:32.996 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:32.996 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:32.996 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:33.565 request: 00:12:33.565 { 00:12:33.565 "name": "nvme0", 00:12:33.565 "trtype": "tcp", 00:12:33.565 "traddr": "10.0.0.3", 00:12:33.565 "adrfam": "ipv4", 00:12:33.565 "trsvcid": "4420", 00:12:33.565 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:33.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:33.565 "prchk_reftag": false, 00:12:33.565 "prchk_guard": false, 00:12:33.565 "hdgst": false, 00:12:33.565 "ddgst": false, 00:12:33.565 "dhchap_key": "key1", 00:12:33.565 "dhchap_ctrlr_key": "ckey2", 00:12:33.565 "allow_unrecognized_csi": false, 00:12:33.565 "method": "bdev_nvme_attach_controller", 00:12:33.565 "req_id": 1 00:12:33.565 } 00:12:33.565 Got JSON-RPC error response 00:12:33.565 response: 00:12:33.565 { 
00:12:33.565 "code": -5, 00:12:33.565 "message": "Input/output error" 00:12:33.565 } 00:12:33.565 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # es=1 00:12:33.565 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:12:33.565 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:12:33.565 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:12:33.565 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # local es=0 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # local arg=bdev_connect 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # type -t bdev_connect 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.566 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.141 
request: 00:12:34.141 { 00:12:34.141 "name": "nvme0", 00:12:34.141 "trtype": "tcp", 00:12:34.141 "traddr": "10.0.0.3", 00:12:34.141 "adrfam": "ipv4", 00:12:34.141 "trsvcid": "4420", 00:12:34.141 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:34.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:34.141 "prchk_reftag": false, 00:12:34.141 "prchk_guard": false, 00:12:34.141 "hdgst": false, 00:12:34.141 "ddgst": false, 00:12:34.141 "dhchap_key": "key1", 00:12:34.141 "dhchap_ctrlr_key": "ckey1", 00:12:34.141 "allow_unrecognized_csi": false, 00:12:34.141 "method": "bdev_nvme_attach_controller", 00:12:34.141 "req_id": 1 00:12:34.141 } 00:12:34.141 Got JSON-RPC error response 00:12:34.141 response: 00:12:34.142 { 00:12:34.142 "code": -5, 00:12:34.142 "message": "Input/output error" 00:12:34.142 } 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # es=1 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67146 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' -z 67146 ']' 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill -0 67146 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # uname 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 67146 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:12:34.142 killing process with pid 67146 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@975 -- # echo 'killing process with pid 67146' 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # kill 67146 00:12:34.142 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@981 -- # wait 67146 00:12:34.401 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:34.401 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:34.401 08:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:34.401 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.401 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70269 00:12:34.401 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:34.401 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70269 00:12:34.401 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # '[' -z 70269 ']' 00:12:34.401 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.401 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@843 -- # local max_retries=100 00:12:34.401 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.401 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@847 -- # xtrace_disable 00:12:34.401 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.337 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:12:35.337 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@871 -- # return 0 00:12:35.337 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.337 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@735 -- # xtrace_disable 00:12:35.337 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.596 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.596 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:35.596 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70269 00:12:35.596 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # '[' -z 70269 ']' 00:12:35.596 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.596 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@843 -- # local max_retries=100 00:12:35.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.596 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
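For readers reproducing this step outside the harness: the target restart recorded above boils down to launching nvmf_tgt inside the test's network namespace with auth logging enabled and then polling its JSON-RPC socket before issuing any rpc.py calls. The sketch below is assembled only from the flags and paths visible in this trace; the polling loop stands in for the traced waitforlisten helper and is not its literal source.

    # Restart the target with nvmf_auth logging, as recorded in the trace above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Stand-in for waitforlisten: block until the app answers on its RPC socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done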
00:12:35.596 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@847 -- # xtrace_disable 00:12:35.596 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@871 -- # return 0 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.855 null0 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pSR 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.AM6 ]] 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AM6 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tz9 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.J0N ]] 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.J0N 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:35.855 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.CCX 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.5Be ]] 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5Be 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.DCk 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.855 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:35.856 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:35.856 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
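The keyring setup above and the attach that follows form the core DH-CHAP provisioning pattern this test exercises: register the key files in the target's keyring, bind one of them to the host entry on the subsystem, then attach from the host side naming the same key. A condensed sketch, using only RPCs and identifiers that appear in this trace (the /tmp/spdk.key-* paths are the ones generated earlier in the run; the host app listens on /var/tmp/host.sock as shown above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4

    # Target side: register the DH-CHAP secret and allow the host to use it.
    $rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.DCk
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

    # Host side (separate app socket): attach naming the same key.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3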
00:12:35.856 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:36.791 nvme0n1 00:12:37.050 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.050 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.050 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.311 { 00:12:37.311 "cntlid": 1, 00:12:37.311 "qid": 0, 00:12:37.311 "state": "enabled", 00:12:37.311 "thread": "nvmf_tgt_poll_group_000", 00:12:37.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:37.311 "listen_address": { 00:12:37.311 "trtype": "TCP", 00:12:37.311 "adrfam": "IPv4", 00:12:37.311 "traddr": "10.0.0.3", 00:12:37.311 "trsvcid": "4420" 00:12:37.311 }, 00:12:37.311 "peer_address": { 00:12:37.311 "trtype": "TCP", 00:12:37.311 "adrfam": "IPv4", 00:12:37.311 "traddr": "10.0.0.1", 00:12:37.311 "trsvcid": "36872" 00:12:37.311 }, 00:12:37.311 "auth": { 00:12:37.311 "state": "completed", 00:12:37.311 "digest": "sha512", 00:12:37.311 "dhgroup": "ffdhe8192" 00:12:37.311 } 00:12:37.311 } 00:12:37.311 ]' 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.311 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.570 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:12:37.570 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:12:38.506 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.506 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:38.506 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:38.506 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.506 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:38.506 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key3 00:12:38.506 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:38.506 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.506 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:38.506 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:38.506 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:38.765 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:38.765 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # local es=0 00:12:38.765 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:38.765 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # local arg=bdev_connect 00:12:38.765 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:38.766 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # type -t bdev_connect 00:12:38.766 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:38.766 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:38.766 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.766 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.025 request: 00:12:39.025 { 00:12:39.025 "name": "nvme0", 00:12:39.025 "trtype": "tcp", 00:12:39.025 "traddr": "10.0.0.3", 00:12:39.025 "adrfam": "ipv4", 00:12:39.025 "trsvcid": "4420", 00:12:39.025 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:39.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:39.025 "prchk_reftag": false, 00:12:39.025 "prchk_guard": false, 00:12:39.025 "hdgst": false, 00:12:39.025 "ddgst": false, 00:12:39.025 "dhchap_key": "key3", 00:12:39.025 "allow_unrecognized_csi": false, 00:12:39.025 "method": "bdev_nvme_attach_controller", 00:12:39.025 "req_id": 1 00:12:39.025 } 00:12:39.025 Got JSON-RPC error response 00:12:39.025 response: 00:12:39.025 { 00:12:39.025 "code": -5, 00:12:39.025 "message": "Input/output error" 00:12:39.025 } 00:12:39.025 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # es=1 00:12:39.025 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:12:39.025 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:12:39.025 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:12:39.025 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:39.025 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:39.025 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:39.025 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:39.285 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:39.285 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # local es=0 00:12:39.285 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:39.285 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # local arg=bdev_connect 00:12:39.285 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:39.285 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # type -t bdev_connect 00:12:39.285 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:39.285 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:39.285 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.285 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.544 request: 00:12:39.544 { 00:12:39.544 "name": "nvme0", 00:12:39.544 "trtype": "tcp", 00:12:39.544 "traddr": "10.0.0.3", 00:12:39.544 "adrfam": "ipv4", 00:12:39.544 "trsvcid": "4420", 00:12:39.544 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:39.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:39.544 "prchk_reftag": false, 00:12:39.544 "prchk_guard": false, 00:12:39.544 "hdgst": false, 00:12:39.544 "ddgst": false, 00:12:39.544 "dhchap_key": "key3", 00:12:39.544 "allow_unrecognized_csi": false, 00:12:39.544 "method": "bdev_nvme_attach_controller", 00:12:39.544 "req_id": 1 00:12:39.544 } 00:12:39.544 Got JSON-RPC error response 00:12:39.544 response: 00:12:39.544 { 00:12:39.544 "code": -5, 00:12:39.544 "message": "Input/output error" 00:12:39.544 } 00:12:39.544 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # es=1 00:12:39.544 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:12:39.544 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:12:39.544 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:12:39.544 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:39.544 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:39.544 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:39.544 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:39.544 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:39.544 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # local es=0 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # local arg=bdev_connect 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # type -t bdev_connect 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:39.804 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:40.372 request: 00:12:40.372 { 00:12:40.372 "name": "nvme0", 00:12:40.372 "trtype": "tcp", 00:12:40.372 "traddr": "10.0.0.3", 00:12:40.372 "adrfam": "ipv4", 00:12:40.372 "trsvcid": "4420", 00:12:40.372 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:40.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:40.372 "prchk_reftag": false, 00:12:40.372 "prchk_guard": false, 00:12:40.372 "hdgst": false, 00:12:40.372 "ddgst": false, 00:12:40.372 "dhchap_key": "key0", 00:12:40.372 "dhchap_ctrlr_key": "key1", 00:12:40.372 "allow_unrecognized_csi": false, 00:12:40.372 "method": "bdev_nvme_attach_controller", 00:12:40.372 "req_id": 1 00:12:40.372 } 00:12:40.372 Got JSON-RPC error response 00:12:40.372 response: 00:12:40.372 { 00:12:40.372 "code": -5, 00:12:40.372 "message": "Input/output error" 00:12:40.372 } 00:12:40.372 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # es=1 00:12:40.372 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:12:40.372 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:12:40.372 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@682 -- # (( 
!es == 0 )) 00:12:40.372 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:40.372 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:40.372 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:40.630 nvme0n1 00:12:40.630 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:40.630 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.630 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:40.888 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.888 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.889 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.147 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 00:12:41.147 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:41.147 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.147 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:41.147 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:41.147 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:41.147 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:42.083 nvme0n1 00:12:42.342 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:42.342 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.342 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:42.602 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.602 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:42.602 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:42.602 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.602 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:42.602 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:42.602 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.602 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:42.861 08:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.861 08:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:12:42.861 08:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid 3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -l 0 --dhchap-secret DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: --dhchap-ctrl-secret DHHC-1:03:MWEwNGQ2NGVmYWRmMTRiNzdkZTIzYzFiYTUzNmEwNGEyNzhjNzk2NjA5ZTgzZTNlZDQ2N2IwNjlkZGFiODFlN7m4HTc=: 00:12:43.429 08:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:43.429 08:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:43.429 08:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:43.429 08:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:43.429 08:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:43.429 08:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:43.429 08:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:43.429 08:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.429 08:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.687 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:43.687 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # local es=0 00:12:43.687 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@657 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:43.687 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # local arg=bdev_connect 00:12:43.687 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:43.687 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # type -t bdev_connect 00:12:43.687 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:43.687 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:43.687 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:43.687 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:44.255 request: 00:12:44.255 { 00:12:44.255 "name": "nvme0", 00:12:44.255 "trtype": "tcp", 00:12:44.255 "traddr": "10.0.0.3", 00:12:44.255 "adrfam": "ipv4", 00:12:44.255 "trsvcid": "4420", 00:12:44.255 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:44.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4", 00:12:44.255 "prchk_reftag": false, 00:12:44.255 "prchk_guard": false, 00:12:44.255 "hdgst": false, 00:12:44.255 "ddgst": false, 00:12:44.255 "dhchap_key": "key1", 00:12:44.255 "allow_unrecognized_csi": false, 00:12:44.255 "method": "bdev_nvme_attach_controller", 00:12:44.255 "req_id": 1 00:12:44.255 } 00:12:44.255 Got JSON-RPC error response 00:12:44.255 response: 00:12:44.255 { 00:12:44.255 "code": -5, 00:12:44.255 "message": "Input/output error" 00:12:44.255 } 00:12:44.514 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # es=1 00:12:44.515 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:12:44.515 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:12:44.515 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:12:44.515 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:44.515 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:44.515 08:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:45.452 nvme0n1 00:12:45.452 
08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:45.452 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:45.452 08:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.710 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.710 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.710 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.969 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:45.969 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:45.969 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.969 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:45.969 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:45.969 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:45.969 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:46.537 nvme0n1 00:12:46.537 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:46.537 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.537 08:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:46.796 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.796 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.796 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.055 08:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: '' 2s 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: ]] 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzRkOWExNjhlODM3MGI4YzgxNzBhMmFiYThkNTVhYTnTVHSX: 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:47.055 08:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # local i=0 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1243 -- # lsblk -l -o NAME 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1243 -- # grep -q -w nvme0n1 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1249 -- # grep -q -w nvme0n1 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1249 -- # lsblk -l -o NAME 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1253 -- # return 0 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: 2s 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:48.959 08:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: ]] 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmUyZGRiYmZkYjJjZTZkYTNkYzhhZGU3M2FiZjY3ZGVkZjU4NTRlNTY5YzNlNzJjBwtJJw==: 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:48.959 08:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:50.862 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:50.862 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # local i=0 00:12:50.862 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1243 -- # lsblk -l -o NAME 00:12:50.862 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1243 -- # grep -q -w nvme0n1 00:12:51.120 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1249 -- # lsblk -l -o NAME 00:12:51.120 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1249 -- # grep -q -w nvme0n1 00:12:51.121 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1253 -- # return 0 00:12:51.121 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.121 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:51.121 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:51.121 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.121 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:51.121 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:51.121 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:51.121 08:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:52.053 nvme0n1 00:12:52.053 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:52.053 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:52.053 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.053 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:52.053 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:52.053 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:52.618 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:12:52.618 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.618 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:12:52.875 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.875 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:12:52.875 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:52.875 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.875 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:52.875 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:12:52.875 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:12:53.467 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:12:53.467 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:12:53.467 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:53.467 08:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # local es=0 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # local arg=hostrpc 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # type -t hostrpc 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:53.467 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:54.035 request: 00:12:54.035 { 00:12:54.035 "name": "nvme0", 00:12:54.035 "dhchap_key": "key1", 00:12:54.035 "dhchap_ctrlr_key": "key3", 00:12:54.035 "method": "bdev_nvme_set_keys", 00:12:54.035 "req_id": 1 00:12:54.035 } 00:12:54.035 Got JSON-RPC error response 00:12:54.035 response: 00:12:54.035 { 00:12:54.035 "code": -13, 00:12:54.035 "message": "Permission denied" 00:12:54.035 } 00:12:54.035 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # es=1 00:12:54.035 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:12:54.035 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:12:54.035 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:12:54.293 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:54.293 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.293 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:54.293 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:12:54.293 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:55.668 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:55.668 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:55.668 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.668 08:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:55.668 08:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:55.668 08:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:55.668 08:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.668 08:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:55.668 08:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:55.668 08:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:55.668 08:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:56.603 nvme0n1 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@566 -- # xtrace_disable 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # local es=0 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # local arg=hostrpc 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # type -t hostrpc 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:56.603 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:57.168 request: 00:12:57.168 { 00:12:57.168 "name": "nvme0", 00:12:57.168 "dhchap_key": "key2", 00:12:57.168 "dhchap_ctrlr_key": "key0", 00:12:57.168 "method": "bdev_nvme_set_keys", 00:12:57.168 "req_id": 1 00:12:57.168 } 00:12:57.168 Got JSON-RPC error response 00:12:57.168 response: 00:12:57.168 { 00:12:57.168 "code": -13, 00:12:57.168 "message": "Permission denied" 00:12:57.168 } 00:12:57.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@658 -- # es=1 00:12:57.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:12:57.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:12:57.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:12:57.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:57.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:57.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.425 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:57.425 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:58.361 08:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:58.361 08:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.361 08:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67170 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' -z 67170 ']' 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill -0 67170 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # uname 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 67170 00:12:58.928 killing process with pid 67170 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:12:58.928 08:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@975 -- # echo 'killing process with pid 67170' 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # kill 67170 00:12:58.928 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@981 -- # wait 67170 00:12:59.187 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:59.187 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:59.187 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:59.187 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:59.188 rmmod nvme_tcp 00:12:59.188 rmmod nvme_fabrics 00:12:59.188 rmmod nvme_keyring 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70269 ']' 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70269 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' -z 70269 ']' 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill -0 70269 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # uname 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 70269 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:12:59.188 killing process with pid 70269 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@975 -- # echo 'killing process with pid 70269' 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # kill 70269 00:12:59.188 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@981 -- # wait 70269 00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:59.447 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:59.738 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:59.738 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:59.738 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:59.738 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:59.738 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:59.738 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:59.738 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:59.738 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.pSR /tmp/spdk.key-sha256.tz9 /tmp/spdk.key-sha384.CCX /tmp/spdk.key-sha512.DCk /tmp/spdk.key-sha512.AM6 /tmp/spdk.key-sha384.J0N /tmp/spdk.key-sha256.5Be '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:59.739 00:12:59.739 real 3m15.614s 00:12:59.739 user 7m47.853s 00:12:59.739 sys 0m31.271s 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1133 -- # xtrace_disable 00:12:59.739 ************************************ 00:12:59.739 END TEST nvmf_auth_target 00:12:59.739 ************************************ 00:12:59.739 08:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # '[' 4 -le 1 ']' 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1114 -- # xtrace_disable 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.739 ************************************ 00:12:59.739 START TEST nvmf_bdevio_no_huge 00:12:59.739 ************************************ 00:12:59.739 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:00.005 * Looking for test storage... 00:13:00.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:00.005 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1638 -- # lcov --version 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:13:00.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.006 --rc genhtml_branch_coverage=1 00:13:00.006 --rc genhtml_function_coverage=1 00:13:00.006 --rc genhtml_legend=1 00:13:00.006 --rc geninfo_all_blocks=1 00:13:00.006 --rc geninfo_unexecuted_blocks=1 00:13:00.006 00:13:00.006 ' 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:13:00.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.006 --rc genhtml_branch_coverage=1 00:13:00.006 --rc genhtml_function_coverage=1 00:13:00.006 --rc genhtml_legend=1 00:13:00.006 --rc geninfo_all_blocks=1 00:13:00.006 --rc geninfo_unexecuted_blocks=1 00:13:00.006 00:13:00.006 ' 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:13:00.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.006 --rc genhtml_branch_coverage=1 00:13:00.006 --rc genhtml_function_coverage=1 00:13:00.006 --rc genhtml_legend=1 00:13:00.006 --rc geninfo_all_blocks=1 00:13:00.006 --rc geninfo_unexecuted_blocks=1 00:13:00.006 00:13:00.006 ' 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:13:00.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.006 --rc genhtml_branch_coverage=1 00:13:00.006 --rc genhtml_function_coverage=1 00:13:00.006 --rc genhtml_legend=1 00:13:00.006 --rc geninfo_all_blocks=1 00:13:00.006 --rc geninfo_unexecuted_blocks=1 00:13:00.006 00:13:00.006 ' 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:00.006 
08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.006 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:00.007 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != 
virt ]] 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:00.007 Cannot find device "nvmf_init_br" 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:00.007 Cannot find device "nvmf_init_br2" 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:00.007 Cannot find device "nvmf_tgt_br" 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:00.007 08:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:00.007 Cannot find device "nvmf_tgt_br2" 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:00.007 Cannot find device "nvmf_init_br" 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:00.007 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:00.266 Cannot find device "nvmf_init_br2" 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:00.266 Cannot find device "nvmf_tgt_br" 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:00.266 Cannot find device "nvmf_tgt_br2" 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:00.266 Cannot find device "nvmf_br" 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:00.266 Cannot find device "nvmf_init_if" 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:00.266 Cannot find device "nvmf_init_if2" 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:00.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:00.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link 
add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:00.266 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:00.267 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:00.267 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:00.267 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:00.267 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:00.267 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:00.526 08:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:00.526 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:00.526 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:13:00.526 00:13:00.526 --- 10.0.0.3 ping statistics --- 00:13:00.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.526 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:00.526 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:00.526 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:13:00.526 00:13:00.526 --- 10.0.0.4 ping statistics --- 00:13:00.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.526 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:00.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:00.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:00.526 00:13:00.526 --- 10.0.0.1 ping statistics --- 00:13:00.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.526 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:00.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:00.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:13:00.526 00:13:00.526 --- 10.0.0.2 ping statistics --- 00:13:00.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.526 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70935 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70935 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # '[' -z 70935 ']' 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:00.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:00.526 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:00.526 [2024-11-20 08:25:47.950550] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
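For reference, the no-hugepage target launch traced above comes down to the single command recorded in the log, run inside the nvmf_tgt_ns_spdk namespace built by the preceding veth/bridge setup (this is a sketch of what the harness does, not a standalone recipe; the harness then waits on the default /var/tmp/spdk.sock socket):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

Here --no-huge -s 1024 gives the target 1024 MB of ordinary (non-hugepage) memory and -m 0x78 selects cores 3-6, matching the four reactors reported a few lines below.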
00:13:00.526 [2024-11-20 08:25:47.950656] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:00.784 [2024-11-20 08:25:48.115683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.784 [2024-11-20 08:25:48.198357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.784 [2024-11-20 08:25:48.198430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.784 [2024-11-20 08:25:48.198443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.784 [2024-11-20 08:25:48.198454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.784 [2024-11-20 08:25:48.198463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.784 [2024-11-20 08:25:48.199195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:00.784 [2024-11-20 08:25:48.199914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:00.784 [2024-11-20 08:25:48.200036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:00.784 [2024-11-20 08:25:48.200399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.784 [2024-11-20 08:25:48.206405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@871 -- # return 0 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@735 -- # xtrace_disable 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@566 -- # xtrace_disable 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.720 [2024-11-20 08:25:48.961376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@566 -- # xtrace_disable 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.720 Malloc0 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:13:01.720 08:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@566 -- # xtrace_disable 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@566 -- # xtrace_disable 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@566 -- # xtrace_disable 00:13:01.720 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.720 [2024-11-20 08:25:49.001541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:01.720 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:13:01.720 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:01.720 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:01.720 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:13:01.720 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:13:01.720 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:01.720 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:01.720 { 00:13:01.720 "params": { 00:13:01.720 "name": "Nvme$subsystem", 00:13:01.720 "trtype": "$TEST_TRANSPORT", 00:13:01.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:01.720 "adrfam": "ipv4", 00:13:01.720 "trsvcid": "$NVMF_PORT", 00:13:01.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:01.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:01.720 "hdgst": ${hdgst:-false}, 00:13:01.720 "ddgst": ${ddgst:-false} 00:13:01.720 }, 00:13:01.720 "method": "bdev_nvme_attach_controller" 00:13:01.720 } 00:13:01.720 EOF 00:13:01.720 )") 00:13:01.720 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:13:01.720 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
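Distilled from the rpc_cmd calls traced above, the target-side setup for this suite is a five-step RPC sequence (arguments copied verbatim from the log; the JSON config the bdevio initiator consumes is rendered by gen_nvmf_target_json in the lines that follow):

    nvmf_create_transport -t tcp -o -u 8192
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Each call goes through the harness's rpc_cmd helper against the target's RPC socket.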
00:13:01.720 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:13:01.720 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:01.720 "params": { 00:13:01.720 "name": "Nvme1", 00:13:01.720 "trtype": "tcp", 00:13:01.720 "traddr": "10.0.0.3", 00:13:01.720 "adrfam": "ipv4", 00:13:01.720 "trsvcid": "4420", 00:13:01.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:01.720 "hdgst": false, 00:13:01.720 "ddgst": false 00:13:01.720 }, 00:13:01.720 "method": "bdev_nvme_attach_controller" 00:13:01.720 }' 00:13:01.720 [2024-11-20 08:25:49.061145] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:13:01.721 [2024-11-20 08:25:49.061241] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70971 ] 00:13:01.721 [2024-11-20 08:25:49.223818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:01.980 [2024-11-20 08:25:49.305771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.980 [2024-11-20 08:25:49.305904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.980 [2024-11-20 08:25:49.305914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.980 [2024-11-20 08:25:49.320006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:01.980 I/O targets: 00:13:01.980 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:01.980 00:13:01.980 00:13:01.980 CUnit - A unit testing framework for C - Version 2.1-3 00:13:01.980 http://cunit.sourceforge.net/ 00:13:01.980 00:13:01.980 00:13:01.980 Suite: bdevio tests on: Nvme1n1 00:13:01.980 Test: blockdev write read block ...passed 00:13:01.980 Test: blockdev write zeroes read block ...passed 00:13:02.239 Test: blockdev write zeroes read no split ...passed 00:13:02.239 Test: blockdev write zeroes read split ...passed 00:13:02.239 Test: blockdev write zeroes read split partial ...passed 00:13:02.239 Test: blockdev reset ...[2024-11-20 08:25:49.565913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:02.239 [2024-11-20 08:25:49.566016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104a310 (9): Bad file descriptor 00:13:02.239 [2024-11-20 08:25:49.583832] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:02.239 passed 00:13:02.239 Test: blockdev write read 8 blocks ...passed 00:13:02.239 Test: blockdev write read size > 128k ...passed 00:13:02.239 Test: blockdev write read invalid size ...passed 00:13:02.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.239 Test: blockdev write read max offset ...passed 00:13:02.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.239 Test: blockdev writev readv 8 blocks ...passed 00:13:02.239 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.239 Test: blockdev writev readv block ...passed 00:13:02.239 Test: blockdev writev readv size > 128k ...passed 00:13:02.239 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.239 Test: blockdev comparev and writev ...[2024-11-20 08:25:49.591608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.239 [2024-11-20 08:25:49.591653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:02.239 [2024-11-20 08:25:49.591673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.239 [2024-11-20 08:25:49.591684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:02.239 [2024-11-20 08:25:49.592076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.239 [2024-11-20 08:25:49.592104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:02.239 [2024-11-20 08:25:49.592123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.239 [2024-11-20 08:25:49.592133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:02.239 [2024-11-20 08:25:49.592458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.239 [2024-11-20 08:25:49.592484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:02.239 [2024-11-20 08:25:49.592502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.240 [2024-11-20 08:25:49.592512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:02.240 [2024-11-20 08:25:49.592879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.240 [2024-11-20 08:25:49.592904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:02.240 [2024-11-20 08:25:49.592922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:02.240 [2024-11-20 08:25:49.592932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:02.240 passed 00:13:02.240 Test: blockdev nvme passthru rw ...passed 00:13:02.240 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:25:49.593734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:02.240 [2024-11-20 08:25:49.593754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:02.240 [2024-11-20 08:25:49.593878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:02.240 [2024-11-20 08:25:49.593896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:02.240 passed 00:13:02.240 Test: blockdev nvme admin passthru ...[2024-11-20 08:25:49.594004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:02.240 [2024-11-20 08:25:49.594020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:02.240 [2024-11-20 08:25:49.594116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:02.240 [2024-11-20 08:25:49.594132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:02.240 passed 00:13:02.240 Test: blockdev copy ...passed 00:13:02.240 00:13:02.240 Run Summary: Type Total Ran Passed Failed Inactive 00:13:02.240 suites 1 1 n/a 0 0 00:13:02.240 tests 23 23 23 0 0 00:13:02.240 asserts 152 152 152 0 n/a 00:13:02.240 00:13:02.240 Elapsed time = 0.165 seconds 00:13:02.499 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.499 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@566 -- # xtrace_disable 00:13:02.499 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:02.499 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:13:02.499 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:02.499 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:02.499 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:02.499 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:02.499 rmmod nvme_tcp 00:13:02.499 rmmod nvme_fabrics 00:13:02.499 rmmod nvme_keyring 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70935 ']' 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70935 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' -z 70935 ']' 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@961 -- # kill -0 70935 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # uname 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:13:02.499 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 70935 00:13:02.758 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@963 -- # process_name=reactor_3 00:13:02.758 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # '[' reactor_3 = sudo ']' 00:13:02.758 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@975 -- # echo 'killing process with pid 70935' 00:13:02.758 killing process with pid 70935 00:13:02.758 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # kill 70935 00:13:02.758 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@981 -- # wait 70935 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:03.016 08:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:03.016 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:03.275 00:13:03.275 real 0m3.450s 00:13:03.275 user 0m10.456s 00:13:03.275 sys 0m1.372s 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1133 -- # xtrace_disable 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:03.275 ************************************ 00:13:03.275 END TEST nvmf_bdevio_no_huge 00:13:03.275 ************************************ 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1114 -- # xtrace_disable 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.275 ************************************ 00:13:03.275 START TEST nvmf_tls 00:13:03.275 ************************************ 00:13:03.275 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:03.534 * Looking for test storage... 
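
The nvmftestfini teardown interleaved above (closing out the bdevio_no_huge target before the TLS test rebuilds everything below) reduces to the short sequence that follows. This is a condensed reading aid using only commands that actually appear in the log, not the real helper from test/nvmf/common.sh; the body of remove_spdk_ns is hidden by xtrace here, so the final namespace deletion is an assumption.

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    sync
    modprobe -v -r nvme-tcp                                           # rmmod nvme_tcp / nvme_keyring above
    modprobe -v -r nvme-fabrics                                       # rmmod nvme_fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                                # killprocess of nvmf_tgt (pid 70935 above)
    iptables-save | grep -v SPDK_NVMF | iptables-restore              # keep everything except SPDK-tagged rules

    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster && ip link set "$dev" down        # detach veth ends from the bridge
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                                  # assumed effect of remove_spdk_ns
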
00:13:03.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1638 -- # lcov --version 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:13:03.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.534 --rc genhtml_branch_coverage=1 00:13:03.534 --rc genhtml_function_coverage=1 00:13:03.534 --rc genhtml_legend=1 00:13:03.534 --rc geninfo_all_blocks=1 00:13:03.534 --rc geninfo_unexecuted_blocks=1 00:13:03.534 00:13:03.534 ' 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:13:03.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.534 --rc genhtml_branch_coverage=1 00:13:03.534 --rc genhtml_function_coverage=1 00:13:03.534 --rc genhtml_legend=1 00:13:03.534 --rc geninfo_all_blocks=1 00:13:03.534 --rc geninfo_unexecuted_blocks=1 00:13:03.534 00:13:03.534 ' 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:13:03.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.534 --rc genhtml_branch_coverage=1 00:13:03.534 --rc genhtml_function_coverage=1 00:13:03.534 --rc genhtml_legend=1 00:13:03.534 --rc geninfo_all_blocks=1 00:13:03.534 --rc geninfo_unexecuted_blocks=1 00:13:03.534 00:13:03.534 ' 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:13:03.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.534 --rc genhtml_branch_coverage=1 00:13:03.534 --rc genhtml_function_coverage=1 00:13:03.534 --rc genhtml_legend=1 00:13:03.534 --rc geninfo_all_blocks=1 00:13:03.534 --rc geninfo_unexecuted_blocks=1 00:13:03.534 00:13:03.534 ' 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.534 08:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.534 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.535 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.535 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:03.535 Cannot find device "nvmf_init_br" 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:03.535 Cannot find device "nvmf_init_br2" 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:03.535 Cannot find device "nvmf_tgt_br" 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:03.535 Cannot find device "nvmf_tgt_br2" 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:03.535 Cannot find device "nvmf_init_br" 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:03.535 Cannot find device "nvmf_init_br2" 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:03.535 Cannot find device "nvmf_tgt_br" 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:03.535 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:03.792 Cannot find device "nvmf_tgt_br2" 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:03.792 Cannot find device "nvmf_br" 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:03.792 Cannot find device "nvmf_init_if" 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:03.792 Cannot find device "nvmf_init_if2" 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:03.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:03.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:03.792 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:03.793 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:03.793 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:03.793 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:03.793 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:03.793 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:03.793 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:03.793 08:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:03.793 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:04.050 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:04.051 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:04.051 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:13:04.051 00:13:04.051 --- 10.0.0.3 ping statistics --- 00:13:04.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.051 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:04.051 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:04.051 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:13:04.051 00:13:04.051 --- 10.0.0.4 ping statistics --- 00:13:04.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.051 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:04.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:13:04.051 00:13:04.051 --- 10.0.0.1 ping statistics --- 00:13:04.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.051 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:04.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:04.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:13:04.051 00:13:04.051 --- 10.0.0.2 ping statistics --- 00:13:04.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.051 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71210 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71210 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 71210 ']' 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:04.051 08:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:04.051 [2024-11-20 08:25:51.474316] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
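
The nvmf_veth_init block above (everything from "ip netns add" through the four pings) builds a small two-namespace topology: the two initiator veths stay in the root namespace, the two target veths move into nvmf_tgt_ns_spdk, and all four peer ends are enslaved to the nvmf_br bridge. Condensed into plain iproute2/iptables commands, with every name and address taken from the log:

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator 1 (10.0.0.1)
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator 2 (10.0.0.2)
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target 1   (10.0.0.3)
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target 2   (10.0.0.4)
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br                           # bridge the four veth peer ends
    done

    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.3                                              # initiator -> target, as checked above

With this in place the target listens on 10.0.0.3:4420 inside the namespace while the initiator connects from the root namespace, which is exactly what the TLS tests below exercise.
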
00:13:04.051 [2024-11-20 08:25:51.474430] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.309 [2024-11-20 08:25:51.630061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.309 [2024-11-20 08:25:51.693825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.309 [2024-11-20 08:25:51.693886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.309 [2024-11-20 08:25:51.693900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.309 [2024-11-20 08:25:51.693911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.309 [2024-11-20 08:25:51.693920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.309 [2024-11-20 08:25:51.694370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.243 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:05.243 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:13:05.243 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:05.243 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@735 -- # xtrace_disable 00:13:05.243 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:05.243 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.243 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:05.243 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:05.501 true 00:13:05.501 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:05.501 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:05.760 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:05.760 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:05.760 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:06.023 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:06.023 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:06.293 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:06.293 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:06.293 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:06.550 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:06.550 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:06.808 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:06.808 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:06.808 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:06.808 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:07.066 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:07.066 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:07.066 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:07.324 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:07.324 08:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:07.582 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:07.582 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:07.582 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:07.839 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:07.839 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.oYS173iy53 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Oym7EUxDrM 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.oYS173iy53 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Oym7EUxDrM 00:13:08.405 08:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:08.664 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:08.923 [2024-11-20 08:25:56.372072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:08.923 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.oYS173iy53 00:13:08.923 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oYS173iy53 00:13:08.923 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:09.181 [2024-11-20 08:25:56.654554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.181 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:09.439 08:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:09.697 [2024-11-20 08:25:57.234700] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:09.697 [2024-11-20 08:25:57.234996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:09.697 08:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:09.955 malloc0 00:13:09.955 08:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:10.213 08:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oYS173iy53 00:13:10.471 08:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:10.729 08:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.oYS173iy53 00:13:23.000 Initializing NVMe Controllers 00:13:23.000 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:23.000 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:23.000 Initialization complete. Launching workers. 00:13:23.000 ======================================================== 00:13:23.000 Latency(us) 00:13:23.000 Device Information : IOPS MiB/s Average min max 00:13:23.000 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9569.10 37.38 6689.76 1000.66 13437.82 00:13:23.000 ======================================================== 00:13:23.000 Total : 9569.10 37.38 6689.76 1000.66 13437.82 00:13:23.000 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oYS173iy53 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oYS173iy53 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71454 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71454 /var/tmp/bdevperf.sock 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 71454 ']' 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:23.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
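
The two /tmp key files used above come from format_interchange_psk, which wraps a raw configured PSK in the NVMe TLS PSK interchange format before the target learns it via keyring_file_add_key key0 and nvmf_subsystem_add_host --psk key0. The helper below is a hypothetical stand-in (psk_interchange is not a real SPDK function) that mirrors what the python heredoc in the log appears to compute: base64 of the PSK bytes with a CRC-32 appended, prefixed with NVMeTLSkey-1 and the hash indicator (01 for SHA-256). The appended-CRC byte order is an assumption made from reading the logged output; the authoritative code is format_key in test/nvmf/common.sh.

    # Hypothetical stand-in, assuming the interchange format is
    # "NVMeTLSkey-1:<hh>:<base64(PSK || CRC32(PSK))>:" with the CRC-32 appended little-endian.
    psk_interchange() {
        local key=$1 digest=$2   # digest: 1 -> SHA-256 ("01"), 2 -> SHA-384 ("02")
        python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # assumed byte order
    print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    ' "$key" "$digest"
    }

    psk_interchange 00112233445566778899aabbccddeeff 1
    # If the assumptions hold, this reproduces the key0 value logged above:
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The second key (/tmp/tmp.Oym7EUxDrM, built from ffeeddccbbaa99887766554433221100) is never registered with the target in this run, which is why the attach that presents it further down in the log is expected to fail.
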
00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:23.000 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.000 [2024-11-20 08:26:08.542676] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:13:23.000 [2024-11-20 08:26:08.543505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71454 ] 00:13:23.000 [2024-11-20 08:26:08.694518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.000 [2024-11-20 08:26:08.749334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.001 [2024-11-20 08:26:08.808040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:23.001 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:23.001 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:13:23.001 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oYS173iy53 00:13:23.001 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:23.001 [2024-11-20 08:26:09.358337] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:23.001 TLSTESTn1 00:13:23.001 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:23.001 Running I/O for 10 seconds... 
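
The per-second samples that follow come from bdevperf driven entirely over its RPC socket. The initiator-side flow, restated from the commands already visible in the log (repo-relative paths; NQNs, addresses, and key as logged):

    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &           # -z: start idle and wait for RPC configuration

    # hand the interchange-format PSK to bdevperf and attach over NVMe/TCP with TLS
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oYS173iy53
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # kick off the configured I/O phase and collect the JSON summary printed below
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The same three-step pattern (start bdevperf with -z, add a key, attach with --psk) repeats for the negative case further down, only with the key the target never registered.
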
00:13:24.191 3987.00 IOPS, 15.57 MiB/s [2024-11-20T08:26:12.686Z] 4033.50 IOPS, 15.76 MiB/s [2024-11-20T08:26:13.622Z] 4053.67 IOPS, 15.83 MiB/s [2024-11-20T08:26:14.996Z] 4073.50 IOPS, 15.91 MiB/s [2024-11-20T08:26:15.578Z] 4071.40 IOPS, 15.90 MiB/s [2024-11-20T08:26:17.001Z] 4073.83 IOPS, 15.91 MiB/s [2024-11-20T08:26:17.567Z] 4085.57 IOPS, 15.96 MiB/s [2024-11-20T08:26:18.944Z] 4078.62 IOPS, 15.93 MiB/s [2024-11-20T08:26:19.879Z] 4074.11 IOPS, 15.91 MiB/s [2024-11-20T08:26:19.879Z] 4077.20 IOPS, 15.93 MiB/s 00:13:32.318 Latency(us) 00:13:32.318 [2024-11-20T08:26:19.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.318 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:32.318 Verification LBA range: start 0x0 length 0x2000 00:13:32.318 TLSTESTn1 : 10.02 4082.60 15.95 0.00 0.00 31294.47 6404.65 24784.52 00:13:32.318 [2024-11-20T08:26:19.879Z] =================================================================================================================== 00:13:32.318 [2024-11-20T08:26:19.879Z] Total : 4082.60 15.95 0.00 0.00 31294.47 6404.65 24784.52 00:13:32.318 { 00:13:32.318 "results": [ 00:13:32.318 { 00:13:32.318 "job": "TLSTESTn1", 00:13:32.318 "core_mask": "0x4", 00:13:32.318 "workload": "verify", 00:13:32.318 "status": "finished", 00:13:32.318 "verify_range": { 00:13:32.318 "start": 0, 00:13:32.318 "length": 8192 00:13:32.318 }, 00:13:32.318 "queue_depth": 128, 00:13:32.318 "io_size": 4096, 00:13:32.318 "runtime": 10.01764, 00:13:32.318 "iops": 4082.598296604789, 00:13:32.318 "mibps": 15.947649596112457, 00:13:32.318 "io_failed": 0, 00:13:32.318 "io_timeout": 0, 00:13:32.318 "avg_latency_us": 31294.47174353936, 00:13:32.318 "min_latency_us": 6404.654545454546, 00:13:32.318 "max_latency_us": 24784.523636363636 00:13:32.318 } 00:13:32.318 ], 00:13:32.318 "core_count": 1 00:13:32.318 } 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71454 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 71454 ']' 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 71454 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 71454 00:13:32.318 killing process with pid 71454 00:13:32.318 Received shutdown signal, test time was about 10.000000 seconds 00:13:32.318 00:13:32.318 Latency(us) 00:13:32.318 [2024-11-20T08:26:19.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.318 [2024-11-20T08:26:19.879Z] =================================================================================================================== 00:13:32.318 [2024-11-20T08:26:19.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing 
process with pid 71454' 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 71454 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 71454 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Oym7EUxDrM 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # local es=0 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@657 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Oym7EUxDrM 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # local arg=run_bdevperf 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # type -t run_bdevperf 00:13:32.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@658 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Oym7EUxDrM 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Oym7EUxDrM 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71581 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71581 /var/tmp/bdevperf.sock 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 71581 ']' 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:32.318 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:32.577 [2024-11-20 08:26:19.878099] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:13:32.577 [2024-11-20 08:26:19.878617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71581 ] 00:13:32.577 [2024-11-20 08:26:20.025648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.577 [2024-11-20 08:26:20.078201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.577 [2024-11-20 08:26:20.134225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:32.836 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:32.836 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:13:32.836 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Oym7EUxDrM 00:13:33.094 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:33.353 [2024-11-20 08:26:20.845505] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:33.353 [2024-11-20 08:26:20.855961] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:33.353 [2024-11-20 08:26:20.856452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1800fb0 (107): Transport endpoint is not connected 00:13:33.353 [2024-11-20 08:26:20.857434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1800fb0 (9): Bad file descriptor 00:13:33.353 [2024-11-20 08:26:20.858431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:33.353 [2024-11-20 08:26:20.858459] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:33.353 [2024-11-20 08:26:20.858470] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:33.353 [2024-11-20 08:26:20.858486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state.
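The expected-failure case above follows the same initiator-side recipe as every run_bdevperf invocation in this log: bdevperf is launched idle with -z on its own RPC socket, the PSK file is registered as a keyring entry over that socket, and only then is the TLS controller attach attempted. A minimal sketch of that sequence, using the socket path, key name and temporary key file shown in the xtrace above:

    # start bdevperf idle (-z) on its private RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # register the PSK file as keyring entry "key0"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Oym7EUxDrM
    # attach the TLS-enabled controller, referencing the keyring entry
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

Because target/tls.sh@147 wraps this run in NOT, the attach is supposed to fail, which is exactly what the transport errors above and the -5 Input/output error response below record.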
00:13:33.353 request: 00:13:33.353 { 00:13:33.353 "name": "TLSTEST", 00:13:33.353 "trtype": "tcp", 00:13:33.353 "traddr": "10.0.0.3", 00:13:33.353 "adrfam": "ipv4", 00:13:33.353 "trsvcid": "4420", 00:13:33.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:33.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:33.353 "prchk_reftag": false, 00:13:33.353 "prchk_guard": false, 00:13:33.353 "hdgst": false, 00:13:33.353 "ddgst": false, 00:13:33.353 "psk": "key0", 00:13:33.353 "allow_unrecognized_csi": false, 00:13:33.353 "method": "bdev_nvme_attach_controller", 00:13:33.353 "req_id": 1 00:13:33.353 } 00:13:33.353 Got JSON-RPC error response 00:13:33.354 response: 00:13:33.354 { 00:13:33.354 "code": -5, 00:13:33.354 "message": "Input/output error" 00:13:33.354 } 00:13:33.354 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71581 00:13:33.354 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 71581 ']' 00:13:33.354 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 71581 00:13:33.354 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:13:33.354 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:13:33.354 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 71581 00:13:33.354 killing process with pid 71581 00:13:33.354 Received shutdown signal, test time was about 10.000000 seconds 00:13:33.354 00:13:33.354 Latency(us) 00:13:33.354 [2024-11-20T08:26:20.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.354 [2024-11-20T08:26:20.915Z] =================================================================================================================== 00:13:33.354 [2024-11-20T08:26:20.915Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:33.354 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:13:33.354 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:13:33.354 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 71581' 00:13:33.354 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 71581 00:13:33.354 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 71581 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@658 -- # es=1 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.oYS173iy53 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # local es=0 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@657 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.oYS173iy53 
00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # local arg=run_bdevperf 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # type -t run_bdevperf 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@658 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.oYS173iy53 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oYS173iy53 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71608 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71608 /var/tmp/bdevperf.sock 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 71608 ']' 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:33.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:33.613 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.613 [2024-11-20 08:26:21.157741] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:13:33.613 [2024-11-20 08:26:21.158224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71608 ] 00:13:33.872 [2024-11-20 08:26:21.308565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.872 [2024-11-20 08:26:21.375569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.130 [2024-11-20 08:26:21.435188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:34.761 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:34.761 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:13:34.761 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oYS173iy53 00:13:35.019 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:35.278 [2024-11-20 08:26:22.729313] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:35.278 [2024-11-20 08:26:22.736065] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:35.278 [2024-11-20 08:26:22.736434] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:35.278 [2024-11-20 08:26:22.736674] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:35.278 [2024-11-20 08:26:22.737506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61bfb0 (107): Transport endpoint is not connected 00:13:35.278 [2024-11-20 08:26:22.738501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61bfb0 (9): Bad file descriptor 00:13:35.278 [2024-11-20 08:26:22.739497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:35.278 [2024-11-20 08:26:22.739541] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:35.278 [2024-11-20 08:26:22.739569] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:35.278 [2024-11-20 08:26:22.739585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
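This case (target/tls.sh@150) fails on the target side: the PSK lookup is keyed by the identity string in the error above ('NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1'), and no key was ever registered for host2 against cnode1. If that pairing were meant to succeed, the target would need a keyring entry and a host authorization of its own, mirroring the nvmf_subsystem_add_host --psk call used for host1 later in this log; this is only a sketch and the key name key_host2 is hypothetical:

    # on the target's default RPC socket (/var/tmp/spdk.sock)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key_host2 /tmp/tmp.oYS173iy53   # hypothetical key name
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key_host2

The test deliberately omits this, so the handshake is rejected and bdev_nvme_attach_controller comes back with the -5 Input/output error shown below.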
00:13:35.278 request: 00:13:35.278 { 00:13:35.278 "name": "TLSTEST", 00:13:35.278 "trtype": "tcp", 00:13:35.278 "traddr": "10.0.0.3", 00:13:35.278 "adrfam": "ipv4", 00:13:35.278 "trsvcid": "4420", 00:13:35.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.278 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:35.278 "prchk_reftag": false, 00:13:35.278 "prchk_guard": false, 00:13:35.278 "hdgst": false, 00:13:35.278 "ddgst": false, 00:13:35.278 "psk": "key0", 00:13:35.278 "allow_unrecognized_csi": false, 00:13:35.278 "method": "bdev_nvme_attach_controller", 00:13:35.278 "req_id": 1 00:13:35.278 } 00:13:35.278 Got JSON-RPC error response 00:13:35.278 response: 00:13:35.278 { 00:13:35.278 "code": -5, 00:13:35.278 "message": "Input/output error" 00:13:35.278 } 00:13:35.278 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71608 00:13:35.278 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 71608 ']' 00:13:35.278 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 71608 00:13:35.278 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:13:35.278 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:13:35.278 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 71608 00:13:35.278 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:13:35.278 killing process with pid 71608 00:13:35.278 Received shutdown signal, test time was about 10.000000 seconds 00:13:35.278 00:13:35.278 Latency(us) 00:13:35.278 [2024-11-20T08:26:22.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.278 [2024-11-20T08:26:22.839Z] =================================================================================================================== 00:13:35.278 [2024-11-20T08:26:22.839Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:35.278 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:13:35.278 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 71608' 00:13:35.278 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 71608 00:13:35.278 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 71608 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@658 -- # es=1 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.oYS173iy53 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # local es=0 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@657 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.oYS173iy53 
00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # local arg=run_bdevperf 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # type -t run_bdevperf 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@658 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.oYS173iy53 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oYS173iy53 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71639 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71639 /var/tmp/bdevperf.sock 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 71639 ']' 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:35.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:35.537 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:35.537 [2024-11-20 08:26:23.040716] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:13:35.537 [2024-11-20 08:26:23.041011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71639 ] 00:13:35.796 [2024-11-20 08:26:23.184234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.796 [2024-11-20 08:26:23.235086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.796 [2024-11-20 08:26:23.291669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:36.733 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:36.733 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:13:36.733 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oYS173iy53 00:13:36.733 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:36.992 [2024-11-20 08:26:24.531822] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:36.992 [2024-11-20 08:26:24.536762] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:36.992 [2024-11-20 08:26:24.536910] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:36.992 [2024-11-20 08:26:24.537012] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:36.992 [2024-11-20 08:26:24.537465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5cfb0 (107): Transport endpoint is not connected 00:13:36.992 [2024-11-20 08:26:24.538455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5cfb0 (9): Bad file descriptor 00:13:36.992 [2024-11-20 08:26:24.539451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:36.992 [2024-11-20 08:26:24.539645] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:36.992 [2024-11-20 08:26:24.539662] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:36.992 [2024-11-20 08:26:24.539682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
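This run (target/tls.sh@153) and the host2 run above fail the same lookup. Judging purely from the tcp_sock_get_key errors, the identity the target searches for is the fixed NVMe0R01 tag followed by the host NQN and the subsystem NQN, so a PSK has to exist for every (host, subsystem) pair that is allowed to connect. An illustrative reconstruction of the identity for this case, using the NQNs from the xtrace:

    hostnqn=nqn.2016-06.io.spdk:host1
    subnqn=nqn.2016-06.io.spdk:cnode2
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"   # -> NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2

Since nqn.2016-06.io.spdk:cnode2 was never created on the target, no such entry can exist and the attach again ends in the -5 Input/output error below.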
00:13:36.992 request: 00:13:36.992 { 00:13:36.992 "name": "TLSTEST", 00:13:36.992 "trtype": "tcp", 00:13:36.992 "traddr": "10.0.0.3", 00:13:36.992 "adrfam": "ipv4", 00:13:36.992 "trsvcid": "4420", 00:13:36.992 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:36.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:36.992 "prchk_reftag": false, 00:13:36.992 "prchk_guard": false, 00:13:36.992 "hdgst": false, 00:13:36.992 "ddgst": false, 00:13:36.992 "psk": "key0", 00:13:36.992 "allow_unrecognized_csi": false, 00:13:36.992 "method": "bdev_nvme_attach_controller", 00:13:36.992 "req_id": 1 00:13:36.992 } 00:13:36.992 Got JSON-RPC error response 00:13:36.992 response: 00:13:36.992 { 00:13:36.992 "code": -5, 00:13:36.992 "message": "Input/output error" 00:13:36.992 } 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71639 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 71639 ']' 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 71639 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 71639 00:13:37.251 killing process with pid 71639 00:13:37.251 Received shutdown signal, test time was about 10.000000 seconds 00:13:37.251 00:13:37.251 Latency(us) 00:13:37.251 [2024-11-20T08:26:24.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.251 [2024-11-20T08:26:24.812Z] =================================================================================================================== 00:13:37.251 [2024-11-20T08:26:24.812Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 71639' 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 71639 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 71639 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@658 -- # es=1 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:13:37.251 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # local es=0 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@657 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:37.252 08:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # local arg=run_bdevperf 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # type -t run_bdevperf 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@658 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71668 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71668 /var/tmp/bdevperf.sock 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 71668 ']' 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:37.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:37.252 08:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.510 [2024-11-20 08:26:24.827949] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:13:37.510 [2024-11-20 08:26:24.828251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71668 ] 00:13:37.510 [2024-11-20 08:26:24.968954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.510 [2024-11-20 08:26:25.020083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.768 [2024-11-20 08:26:25.074335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:37.768 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:37.768 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:13:37.768 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:38.065 [2024-11-20 08:26:25.480839] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:38.065 [2024-11-20 08:26:25.481136] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:38.065 request: 00:13:38.065 { 00:13:38.065 "name": "key0", 00:13:38.065 "path": "", 00:13:38.065 "method": "keyring_file_add_key", 00:13:38.065 "req_id": 1 00:13:38.065 } 00:13:38.065 Got JSON-RPC error response 00:13:38.065 response: 00:13:38.065 { 00:13:38.065 "code": -1, 00:13:38.065 "message": "Operation not permitted" 00:13:38.065 } 00:13:38.065 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:38.323 [2024-11-20 08:26:25.793096] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:38.323 [2024-11-20 08:26:25.793179] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:38.323 request: 00:13:38.323 { 00:13:38.323 "name": "TLSTEST", 00:13:38.323 "trtype": "tcp", 00:13:38.323 "traddr": "10.0.0.3", 00:13:38.323 "adrfam": "ipv4", 00:13:38.323 "trsvcid": "4420", 00:13:38.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.323 "prchk_reftag": false, 00:13:38.323 "prchk_guard": false, 00:13:38.323 "hdgst": false, 00:13:38.323 "ddgst": false, 00:13:38.323 "psk": "key0", 00:13:38.323 "allow_unrecognized_csi": false, 00:13:38.323 "method": "bdev_nvme_attach_controller", 00:13:38.323 "req_id": 1 00:13:38.323 } 00:13:38.323 Got JSON-RPC error response 00:13:38.323 response: 00:13:38.323 { 00:13:38.323 "code": -126, 00:13:38.323 "message": "Required key not available" 00:13:38.323 } 00:13:38.323 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71668 00:13:38.323 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 71668 ']' 00:13:38.323 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 71668 00:13:38.323 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:13:38.323 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:13:38.323 08:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 71668 00:13:38.323 killing process with pid 71668 00:13:38.323 Received shutdown signal, test time was about 10.000000 seconds 00:13:38.323 00:13:38.323 Latency(us) 00:13:38.323 [2024-11-20T08:26:25.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.323 [2024-11-20T08:26:25.884Z] =================================================================================================================== 00:13:38.323 [2024-11-20T08:26:25.884Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:38.323 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:13:38.324 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:13:38.324 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 71668' 00:13:38.324 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 71668 00:13:38.324 08:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 71668 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@658 -- # es=1 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71210 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 71210 ']' 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 71210 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 71210 00:13:38.583 killing process with pid 71210 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 71210' 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 71210 00:13:38.583 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 71210 00:13:38.842 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:38.842 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:38.842 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:38.842 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:38.842 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:38.842 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:38.842 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.jQCv2Cn94U 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.jQCv2Cn94U 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71710 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71710 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 71710 ']' 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:39.101 08:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.101 [2024-11-20 08:26:26.491010] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:13:39.101 [2024-11-20 08:26:26.491108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.101 [2024-11-20 08:26:26.641289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.359 [2024-11-20 08:26:26.722203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.359 [2024-11-20 08:26:26.722285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
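target/tls.sh@160-163 above stage the long-format key for the positive test: format_interchange_psk wraps the raw hex secret in the NVMe TLS PSK interchange format (the NVMeTLSkey-1 prefix, the :02: field selected by the trailing '2' digest argument, and a base64 blob that, judging from the python helper in nvmf/common.sh, carries the secret plus a CRC32), and the result is written to a mktemp file that must be kept private for the keyring to accept it. Reconstructed from the xtrace:

    key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
    key_long_path=$(mktemp)             # /tmp/tmp.jQCv2Cn94U in this run
    echo -n "$key_long" > "$key_long_path"
    chmod 0600 "$key_long_path"         # keyring_file_add_key rejects group/world-accessible files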
00:13:39.359 [2024-11-20 08:26:26.722296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.359 [2024-11-20 08:26:26.722304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.359 [2024-11-20 08:26:26.722311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.359 [2024-11-20 08:26:26.722864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.359 [2024-11-20 08:26:26.801404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:40.296 08:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:40.296 08:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:13:40.296 08:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:40.296 08:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@735 -- # xtrace_disable 00:13:40.296 08:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.296 08:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.296 08:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.jQCv2Cn94U 00:13:40.296 08:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jQCv2Cn94U 00:13:40.296 08:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:40.296 [2024-11-20 08:26:27.785688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.296 08:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:40.555 08:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:40.814 [2024-11-20 08:26:28.317846] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:40.814 [2024-11-20 08:26:28.318158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:40.814 08:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:41.073 malloc0 00:13:41.073 08:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:41.332 08:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U 00:13:41.590 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jQCv2Cn94U 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
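target/tls.sh@164-166 then bring up the target for the positive test: nvmfappstart launches nvmf_tgt with -m 0x2 inside the nvmf_tgt_ns_spdk netns, and setup_nvmf_tgt wires up the TLS listener, the backing namespace and the per-host PSK. The full target-side RPC sequence, collected from the xtrace above (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, talking to the default /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k marks the listener as TLS
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0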
00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jQCv2Cn94U 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71766 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71766 /var/tmp/bdevperf.sock 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 71766 ']' 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:41.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:41.861 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.121 [2024-11-20 08:26:29.418615] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:13:42.121 [2024-11-20 08:26:29.419061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71766 ] 00:13:42.121 [2024-11-20 08:26:29.557702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.121 [2024-11-20 08:26:29.620052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.121 [2024-11-20 08:26:29.678522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:42.380 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:42.380 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:13:42.380 08:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U 00:13:42.639 08:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:42.897 [2024-11-20 08:26:30.269874] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:42.897 TLSTESTn1 00:13:42.897 08:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:43.156 Running I/O for 10 seconds... 00:13:45.027 3908.00 IOPS, 15.27 MiB/s [2024-11-20T08:26:33.524Z] 3953.50 IOPS, 15.44 MiB/s [2024-11-20T08:26:34.509Z] 3961.33 IOPS, 15.47 MiB/s [2024-11-20T08:26:35.885Z] 3948.50 IOPS, 15.42 MiB/s [2024-11-20T08:26:36.820Z] 3957.80 IOPS, 15.46 MiB/s [2024-11-20T08:26:37.755Z] 3957.83 IOPS, 15.46 MiB/s [2024-11-20T08:26:38.691Z] 3959.14 IOPS, 15.47 MiB/s [2024-11-20T08:26:39.626Z] 3951.38 IOPS, 15.44 MiB/s [2024-11-20T08:26:40.561Z] 3952.44 IOPS, 15.44 MiB/s [2024-11-20T08:26:40.561Z] 3955.50 IOPS, 15.45 MiB/s 00:13:53.000 Latency(us) 00:13:53.000 [2024-11-20T08:26:40.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.001 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:53.001 Verification LBA range: start 0x0 length 0x2000 00:13:53.001 TLSTESTn1 : 10.02 3962.17 15.48 0.00 0.00 32250.67 4915.20 29669.93 00:13:53.001 [2024-11-20T08:26:40.562Z] =================================================================================================================== 00:13:53.001 [2024-11-20T08:26:40.562Z] Total : 3962.17 15.48 0.00 0.00 32250.67 4915.20 29669.93 00:13:53.001 { 00:13:53.001 "results": [ 00:13:53.001 { 00:13:53.001 "job": "TLSTESTn1", 00:13:53.001 "core_mask": "0x4", 00:13:53.001 "workload": "verify", 00:13:53.001 "status": "finished", 00:13:53.001 "verify_range": { 00:13:53.001 "start": 0, 00:13:53.001 "length": 8192 00:13:53.001 }, 00:13:53.001 "queue_depth": 128, 00:13:53.001 "io_size": 4096, 00:13:53.001 "runtime": 10.015481, 00:13:53.001 "iops": 3962.1661705513693, 00:13:53.001 "mibps": 15.477211603716286, 00:13:53.001 "io_failed": 0, 00:13:53.001 "io_timeout": 0, 00:13:53.001 "avg_latency_us": 32250.672871964864, 00:13:53.001 "min_latency_us": 4915.2, 00:13:53.001 
"max_latency_us": 29669.934545454544 00:13:53.001 } 00:13:53.001 ], 00:13:53.001 "core_count": 1 00:13:53.001 } 00:13:53.001 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:53.001 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71766 00:13:53.001 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 71766 ']' 00:13:53.001 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 71766 00:13:53.001 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:13:53.001 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:13:53.001 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 71766 00:13:53.259 killing process with pid 71766 00:13:53.259 Received shutdown signal, test time was about 10.000000 seconds 00:13:53.259 00:13:53.259 Latency(us) 00:13:53.259 [2024-11-20T08:26:40.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.259 [2024-11-20T08:26:40.820Z] =================================================================================================================== 00:13:53.259 [2024-11-20T08:26:40.820Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 71766' 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 71766 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 71766 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.jQCv2Cn94U 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jQCv2Cn94U 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # local es=0 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@657 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jQCv2Cn94U 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # local arg=run_bdevperf 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # type -t run_bdevperf 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@658 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jQCv2Cn94U 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jQCv2Cn94U 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71894 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71894 /var/tmp/bdevperf.sock 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 71894 ']' 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:53.259 08:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.516 [2024-11-20 08:26:40.831414] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
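The successful TLSTESTn1 run recorded above (target/tls.sh@168, roughly 3962 IOPS over ~10 s) used the same initiator recipe as the failing cases, this time with a key the target actually knows: register the interchange-format key file on the bdevperf socket, attach with --psk, then drive the verify workload through the bdevperf RPC helper. Reconstructed from the xtrace (rpc.py as above):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

target/tls.sh@171 then deliberately loosens the key file to 0666, so the run now starting (pid 71894) is expected to be rejected by the keyring.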
00:13:53.517 [2024-11-20 08:26:40.831765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71894 ] 00:13:53.517 [2024-11-20 08:26:40.979896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.517 [2024-11-20 08:26:41.041727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.785 [2024-11-20 08:26:41.096656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:53.785 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:53.785 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:13:53.785 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U 00:13:54.059 [2024-11-20 08:26:41.424352] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jQCv2Cn94U': 0100666 00:13:54.059 [2024-11-20 08:26:41.424644] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:54.059 request: 00:13:54.059 { 00:13:54.059 "name": "key0", 00:13:54.059 "path": "/tmp/tmp.jQCv2Cn94U", 00:13:54.059 "method": "keyring_file_add_key", 00:13:54.059 "req_id": 1 00:13:54.059 } 00:13:54.059 Got JSON-RPC error response 00:13:54.059 response: 00:13:54.059 { 00:13:54.059 "code": -1, 00:13:54.059 "message": "Operation not permitted" 00:13:54.059 } 00:13:54.059 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:54.317 [2024-11-20 08:26:41.772566] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:54.317 [2024-11-20 08:26:41.772908] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:54.317 request: 00:13:54.317 { 00:13:54.317 "name": "TLSTEST", 00:13:54.317 "trtype": "tcp", 00:13:54.317 "traddr": "10.0.0.3", 00:13:54.317 "adrfam": "ipv4", 00:13:54.317 "trsvcid": "4420", 00:13:54.317 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.317 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:54.317 "prchk_reftag": false, 00:13:54.317 "prchk_guard": false, 00:13:54.317 "hdgst": false, 00:13:54.317 "ddgst": false, 00:13:54.317 "psk": "key0", 00:13:54.317 "allow_unrecognized_csi": false, 00:13:54.317 "method": "bdev_nvme_attach_controller", 00:13:54.317 "req_id": 1 00:13:54.317 } 00:13:54.317 Got JSON-RPC error response 00:13:54.317 response: 00:13:54.317 { 00:13:54.317 "code": -126, 00:13:54.317 "message": "Required key not available" 00:13:54.317 } 00:13:54.317 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71894 00:13:54.317 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 71894 ']' 00:13:54.317 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 71894 00:13:54.317 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:13:54.317 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:13:54.317 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 71894 00:13:54.317 killing process with pid 71894 00:13:54.317 Received shutdown signal, test time was about 10.000000 seconds 00:13:54.317 00:13:54.317 Latency(us) 00:13:54.317 [2024-11-20T08:26:41.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.317 [2024-11-20T08:26:41.878Z] =================================================================================================================== 00:13:54.317 [2024-11-20T08:26:41.878Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:54.317 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:13:54.317 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:13:54.317 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 71894' 00:13:54.317 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 71894 00:13:54.317 08:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 71894 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@658 -- # es=1 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71710 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 71710 ']' 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 71710 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 71710 00:13:54.575 killing process with pid 71710 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 71710' 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 71710 00:13:54.575 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 71710 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71924 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71924 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 71924 ']' 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:54.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:54.834 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.834 [2024-11-20 08:26:42.383719] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:13:54.834 [2024-11-20 08:26:42.384113] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.092 [2024-11-20 08:26:42.529362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.092 [2024-11-20 08:26:42.598999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.092 [2024-11-20 08:26:42.599076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.092 [2024-11-20 08:26:42.599088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.092 [2024-11-20 08:26:42.599096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.092 [2024-11-20 08:26:42.599103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
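A note on the startup pattern traced here: nvmfappstart launches the target inside the test network namespace and waitforlisten polls the RPC socket until it answers. A rough standalone equivalent of that pair (the polling loop below is an assumption for illustration; the real helpers live in nvmf/common.sh and autotest_common.sh):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll until the UNIX-domain RPC socket accepts requests
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done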
00:13:55.092 [2024-11-20 08:26:42.599593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.350 [2024-11-20 08:26:42.675182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@735 -- # xtrace_disable 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.jQCv2Cn94U 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # local es=0 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@657 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.jQCv2Cn94U 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # local arg=setup_nvmf_tgt 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # type -t setup_nvmf_tgt 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@658 -- # setup_nvmf_tgt /tmp/tmp.jQCv2Cn94U 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jQCv2Cn94U 00:13:55.350 08:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:55.608 [2024-11-20 08:26:43.019054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.608 08:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:55.866 08:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:56.125 [2024-11-20 08:26:43.535180] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:56.125 [2024-11-20 08:26:43.535515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:56.125 08:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:56.383 malloc0 00:13:56.383 08:26:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:56.642 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U 00:13:56.901 
[2024-11-20 08:26:44.341469] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jQCv2Cn94U': 0100666 00:13:56.901 [2024-11-20 08:26:44.341552] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:56.901 request: 00:13:56.901 { 00:13:56.901 "name": "key0", 00:13:56.901 "path": "/tmp/tmp.jQCv2Cn94U", 00:13:56.901 "method": "keyring_file_add_key", 00:13:56.901 "req_id": 1 00:13:56.901 } 00:13:56.901 Got JSON-RPC error response 00:13:56.901 response: 00:13:56.901 { 00:13:56.901 "code": -1, 00:13:56.901 "message": "Operation not permitted" 00:13:56.901 } 00:13:56.901 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:57.159 [2024-11-20 08:26:44.589587] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:13:57.159 [2024-11-20 08:26:44.589711] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:57.159 request: 00:13:57.159 { 00:13:57.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:57.159 "host": "nqn.2016-06.io.spdk:host1", 00:13:57.159 "psk": "key0", 00:13:57.159 "method": "nvmf_subsystem_add_host", 00:13:57.159 "req_id": 1 00:13:57.159 } 00:13:57.159 Got JSON-RPC error response 00:13:57.159 response: 00:13:57.159 { 00:13:57.159 "code": -32603, 00:13:57.159 "message": "Internal error" 00:13:57.159 } 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@658 -- # es=1 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71924 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 71924 ']' 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 71924 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 71924 00:13:57.159 killing process with pid 71924 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 71924' 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 71924 00:13:57.159 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 71924 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.jQCv2Cn94U 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71987 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71987 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 71987 ']' 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:13:57.418 08:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.677 [2024-11-20 08:26:44.998120] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:13:57.677 [2024-11-20 08:26:44.998443] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.677 [2024-11-20 08:26:45.148307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.677 [2024-11-20 08:26:45.215668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.677 [2024-11-20 08:26:45.215995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.677 [2024-11-20 08:26:45.216137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.677 [2024-11-20 08:26:45.216278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.677 [2024-11-20 08:26:45.216322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
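The earlier keyring_file_add_key failure ("Operation not permitted") is a permissions check, not a TLS problem: the key file was created mode 0666 and keyring.c rejects any key readable by group or other. The fix traced at target/tls.sh@182 is simply to tighten the file before registering it again:

  # PSK files must not be group/other readable
  chmod 0600 /tmp/tmp.jQCv2Cn94U
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U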
00:13:57.677 [2024-11-20 08:26:45.216885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.935 [2024-11-20 08:26:45.290947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:58.502 08:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:13:58.502 08:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:13:58.502 08:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:58.502 08:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@735 -- # xtrace_disable 00:13:58.502 08:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:58.502 08:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.502 08:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.jQCv2Cn94U 00:13:58.502 08:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jQCv2Cn94U 00:13:58.502 08:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:58.760 [2024-11-20 08:26:46.247682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.760 08:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:59.328 08:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:59.328 [2024-11-20 08:26:46.839897] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:59.328 [2024-11-20 08:26:46.840511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:59.328 08:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:59.896 malloc0 00:13:59.896 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:59.896 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U 00:14:00.155 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:00.414 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:00.414 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72043 00:14:00.414 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:00.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
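Collected from the trace above, the target-side TLS setup (setup_nvmf_tgt) reduces to the following RPC sequence, using the paths and NQNs of this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                                                          # TCP transport
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k     # -k: TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0                                                    # backing namespace
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U                                            # PSK, mode 0600
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0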
00:14:00.414 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72043 /var/tmp/bdevperf.sock 00:14:00.414 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 72043 ']' 00:14:00.414 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:00.414 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:00.414 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:00.414 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:00.414 08:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.414 [2024-11-20 08:26:47.941682] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:00.414 [2024-11-20 08:26:47.942070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72043 ] 00:14:00.673 [2024-11-20 08:26:48.094179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.673 [2024-11-20 08:26:48.165769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.673 [2024-11-20 08:26:48.225831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:01.608 08:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:01.608 08:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:14:01.608 08:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U 00:14:01.608 08:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:01.868 [2024-11-20 08:26:49.384579] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:02.127 TLSTESTn1 00:14:02.127 08:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:02.386 08:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:02.386 "subsystems": [ 00:14:02.386 { 00:14:02.386 "subsystem": "keyring", 00:14:02.386 "config": [ 00:14:02.386 { 00:14:02.386 "method": "keyring_file_add_key", 00:14:02.386 "params": { 00:14:02.386 "name": "key0", 00:14:02.386 "path": "/tmp/tmp.jQCv2Cn94U" 00:14:02.386 } 00:14:02.386 } 00:14:02.386 ] 00:14:02.386 }, 00:14:02.386 { 00:14:02.386 "subsystem": "iobuf", 00:14:02.386 "config": [ 00:14:02.386 { 00:14:02.386 "method": "iobuf_set_options", 00:14:02.386 "params": { 00:14:02.386 "small_pool_count": 8192, 00:14:02.386 "large_pool_count": 1024, 00:14:02.386 "small_bufsize": 8192, 00:14:02.386 "large_bufsize": 135168, 00:14:02.386 "enable_numa": false 00:14:02.386 } 00:14:02.386 } 00:14:02.386 ] 00:14:02.386 }, 00:14:02.386 { 00:14:02.386 
"subsystem": "sock", 00:14:02.386 "config": [ 00:14:02.386 { 00:14:02.386 "method": "sock_set_default_impl", 00:14:02.386 "params": { 00:14:02.386 "impl_name": "uring" 00:14:02.386 } 00:14:02.386 }, 00:14:02.386 { 00:14:02.386 "method": "sock_impl_set_options", 00:14:02.386 "params": { 00:14:02.386 "impl_name": "ssl", 00:14:02.386 "recv_buf_size": 4096, 00:14:02.386 "send_buf_size": 4096, 00:14:02.386 "enable_recv_pipe": true, 00:14:02.386 "enable_quickack": false, 00:14:02.386 "enable_placement_id": 0, 00:14:02.386 "enable_zerocopy_send_server": true, 00:14:02.386 "enable_zerocopy_send_client": false, 00:14:02.386 "zerocopy_threshold": 0, 00:14:02.386 "tls_version": 0, 00:14:02.386 "enable_ktls": false 00:14:02.386 } 00:14:02.386 }, 00:14:02.386 { 00:14:02.386 "method": "sock_impl_set_options", 00:14:02.386 "params": { 00:14:02.386 "impl_name": "posix", 00:14:02.386 "recv_buf_size": 2097152, 00:14:02.386 "send_buf_size": 2097152, 00:14:02.386 "enable_recv_pipe": true, 00:14:02.386 "enable_quickack": false, 00:14:02.386 "enable_placement_id": 0, 00:14:02.386 "enable_zerocopy_send_server": true, 00:14:02.386 "enable_zerocopy_send_client": false, 00:14:02.386 "zerocopy_threshold": 0, 00:14:02.386 "tls_version": 0, 00:14:02.386 "enable_ktls": false 00:14:02.386 } 00:14:02.386 }, 00:14:02.386 { 00:14:02.386 "method": "sock_impl_set_options", 00:14:02.386 "params": { 00:14:02.386 "impl_name": "uring", 00:14:02.386 "recv_buf_size": 2097152, 00:14:02.386 "send_buf_size": 2097152, 00:14:02.386 "enable_recv_pipe": true, 00:14:02.386 "enable_quickack": false, 00:14:02.386 "enable_placement_id": 0, 00:14:02.387 "enable_zerocopy_send_server": false, 00:14:02.387 "enable_zerocopy_send_client": false, 00:14:02.387 "zerocopy_threshold": 0, 00:14:02.387 "tls_version": 0, 00:14:02.387 "enable_ktls": false 00:14:02.387 } 00:14:02.387 } 00:14:02.387 ] 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "subsystem": "vmd", 00:14:02.387 "config": [] 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "subsystem": "accel", 00:14:02.387 "config": [ 00:14:02.387 { 00:14:02.387 "method": "accel_set_options", 00:14:02.387 "params": { 00:14:02.387 "small_cache_size": 128, 00:14:02.387 "large_cache_size": 16, 00:14:02.387 "task_count": 2048, 00:14:02.387 "sequence_count": 2048, 00:14:02.387 "buf_count": 2048 00:14:02.387 } 00:14:02.387 } 00:14:02.387 ] 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "subsystem": "bdev", 00:14:02.387 "config": [ 00:14:02.387 { 00:14:02.387 "method": "bdev_set_options", 00:14:02.387 "params": { 00:14:02.387 "bdev_io_pool_size": 65535, 00:14:02.387 "bdev_io_cache_size": 256, 00:14:02.387 "bdev_auto_examine": true, 00:14:02.387 "iobuf_small_cache_size": 128, 00:14:02.387 "iobuf_large_cache_size": 16 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "bdev_raid_set_options", 00:14:02.387 "params": { 00:14:02.387 "process_window_size_kb": 1024, 00:14:02.387 "process_max_bandwidth_mb_sec": 0 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "bdev_iscsi_set_options", 00:14:02.387 "params": { 00:14:02.387 "timeout_sec": 30 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "bdev_nvme_set_options", 00:14:02.387 "params": { 00:14:02.387 "action_on_timeout": "none", 00:14:02.387 "timeout_us": 0, 00:14:02.387 "timeout_admin_us": 0, 00:14:02.387 "keep_alive_timeout_ms": 10000, 00:14:02.387 "arbitration_burst": 0, 00:14:02.387 "low_priority_weight": 0, 00:14:02.387 "medium_priority_weight": 0, 00:14:02.387 "high_priority_weight": 0, 00:14:02.387 
"nvme_adminq_poll_period_us": 10000, 00:14:02.387 "nvme_ioq_poll_period_us": 0, 00:14:02.387 "io_queue_requests": 0, 00:14:02.387 "delay_cmd_submit": true, 00:14:02.387 "transport_retry_count": 4, 00:14:02.387 "bdev_retry_count": 3, 00:14:02.387 "transport_ack_timeout": 0, 00:14:02.387 "ctrlr_loss_timeout_sec": 0, 00:14:02.387 "reconnect_delay_sec": 0, 00:14:02.387 "fast_io_fail_timeout_sec": 0, 00:14:02.387 "disable_auto_failback": false, 00:14:02.387 "generate_uuids": false, 00:14:02.387 "transport_tos": 0, 00:14:02.387 "nvme_error_stat": false, 00:14:02.387 "rdma_srq_size": 0, 00:14:02.387 "io_path_stat": false, 00:14:02.387 "allow_accel_sequence": false, 00:14:02.387 "rdma_max_cq_size": 0, 00:14:02.387 "rdma_cm_event_timeout_ms": 0, 00:14:02.387 "dhchap_digests": [ 00:14:02.387 "sha256", 00:14:02.387 "sha384", 00:14:02.387 "sha512" 00:14:02.387 ], 00:14:02.387 "dhchap_dhgroups": [ 00:14:02.387 "null", 00:14:02.387 "ffdhe2048", 00:14:02.387 "ffdhe3072", 00:14:02.387 "ffdhe4096", 00:14:02.387 "ffdhe6144", 00:14:02.387 "ffdhe8192" 00:14:02.387 ] 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "bdev_nvme_set_hotplug", 00:14:02.387 "params": { 00:14:02.387 "period_us": 100000, 00:14:02.387 "enable": false 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "bdev_malloc_create", 00:14:02.387 "params": { 00:14:02.387 "name": "malloc0", 00:14:02.387 "num_blocks": 8192, 00:14:02.387 "block_size": 4096, 00:14:02.387 "physical_block_size": 4096, 00:14:02.387 "uuid": "7d456553-d167-471a-b41e-283a401eb310", 00:14:02.387 "optimal_io_boundary": 0, 00:14:02.387 "md_size": 0, 00:14:02.387 "dif_type": 0, 00:14:02.387 "dif_is_head_of_md": false, 00:14:02.387 "dif_pi_format": 0 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "bdev_wait_for_examine" 00:14:02.387 } 00:14:02.387 ] 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "subsystem": "nbd", 00:14:02.387 "config": [] 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "subsystem": "scheduler", 00:14:02.387 "config": [ 00:14:02.387 { 00:14:02.387 "method": "framework_set_scheduler", 00:14:02.387 "params": { 00:14:02.387 "name": "static" 00:14:02.387 } 00:14:02.387 } 00:14:02.387 ] 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "subsystem": "nvmf", 00:14:02.387 "config": [ 00:14:02.387 { 00:14:02.387 "method": "nvmf_set_config", 00:14:02.387 "params": { 00:14:02.387 "discovery_filter": "match_any", 00:14:02.387 "admin_cmd_passthru": { 00:14:02.387 "identify_ctrlr": false 00:14:02.387 }, 00:14:02.387 "dhchap_digests": [ 00:14:02.387 "sha256", 00:14:02.387 "sha384", 00:14:02.387 "sha512" 00:14:02.387 ], 00:14:02.387 "dhchap_dhgroups": [ 00:14:02.387 "null", 00:14:02.387 "ffdhe2048", 00:14:02.387 "ffdhe3072", 00:14:02.387 "ffdhe4096", 00:14:02.387 "ffdhe6144", 00:14:02.387 "ffdhe8192" 00:14:02.387 ] 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "nvmf_set_max_subsystems", 00:14:02.387 "params": { 00:14:02.387 "max_subsystems": 1024 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "nvmf_set_crdt", 00:14:02.387 "params": { 00:14:02.387 "crdt1": 0, 00:14:02.387 "crdt2": 0, 00:14:02.387 "crdt3": 0 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "nvmf_create_transport", 00:14:02.387 "params": { 00:14:02.387 "trtype": "TCP", 00:14:02.387 "max_queue_depth": 128, 00:14:02.387 "max_io_qpairs_per_ctrlr": 127, 00:14:02.387 "in_capsule_data_size": 4096, 00:14:02.387 "max_io_size": 131072, 00:14:02.387 "io_unit_size": 131072, 00:14:02.387 "max_aq_depth": 128, 
00:14:02.387 "num_shared_buffers": 511, 00:14:02.387 "buf_cache_size": 4294967295, 00:14:02.387 "dif_insert_or_strip": false, 00:14:02.387 "zcopy": false, 00:14:02.387 "c2h_success": false, 00:14:02.387 "sock_priority": 0, 00:14:02.387 "abort_timeout_sec": 1, 00:14:02.387 "ack_timeout": 0, 00:14:02.387 "data_wr_pool_size": 0 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "nvmf_create_subsystem", 00:14:02.387 "params": { 00:14:02.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.387 "allow_any_host": false, 00:14:02.387 "serial_number": "SPDK00000000000001", 00:14:02.387 "model_number": "SPDK bdev Controller", 00:14:02.387 "max_namespaces": 10, 00:14:02.387 "min_cntlid": 1, 00:14:02.387 "max_cntlid": 65519, 00:14:02.387 "ana_reporting": false 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "nvmf_subsystem_add_host", 00:14:02.387 "params": { 00:14:02.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.387 "host": "nqn.2016-06.io.spdk:host1", 00:14:02.387 "psk": "key0" 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "nvmf_subsystem_add_ns", 00:14:02.387 "params": { 00:14:02.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.387 "namespace": { 00:14:02.387 "nsid": 1, 00:14:02.387 "bdev_name": "malloc0", 00:14:02.387 "nguid": "7D456553D167471AB41E283A401EB310", 00:14:02.387 "uuid": "7d456553-d167-471a-b41e-283a401eb310", 00:14:02.387 "no_auto_visible": false 00:14:02.387 } 00:14:02.387 } 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "method": "nvmf_subsystem_add_listener", 00:14:02.387 "params": { 00:14:02.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.387 "listen_address": { 00:14:02.388 "trtype": "TCP", 00:14:02.388 "adrfam": "IPv4", 00:14:02.388 "traddr": "10.0.0.3", 00:14:02.388 "trsvcid": "4420" 00:14:02.388 }, 00:14:02.388 "secure_channel": true 00:14:02.388 } 00:14:02.388 } 00:14:02.388 ] 00:14:02.388 } 00:14:02.388 ] 00:14:02.388 }' 00:14:02.388 08:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:02.646 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:02.646 "subsystems": [ 00:14:02.646 { 00:14:02.646 "subsystem": "keyring", 00:14:02.646 "config": [ 00:14:02.646 { 00:14:02.646 "method": "keyring_file_add_key", 00:14:02.646 "params": { 00:14:02.646 "name": "key0", 00:14:02.646 "path": "/tmp/tmp.jQCv2Cn94U" 00:14:02.646 } 00:14:02.646 } 00:14:02.646 ] 00:14:02.646 }, 00:14:02.646 { 00:14:02.646 "subsystem": "iobuf", 00:14:02.646 "config": [ 00:14:02.646 { 00:14:02.646 "method": "iobuf_set_options", 00:14:02.646 "params": { 00:14:02.646 "small_pool_count": 8192, 00:14:02.646 "large_pool_count": 1024, 00:14:02.647 "small_bufsize": 8192, 00:14:02.647 "large_bufsize": 135168, 00:14:02.647 "enable_numa": false 00:14:02.647 } 00:14:02.647 } 00:14:02.647 ] 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "subsystem": "sock", 00:14:02.647 "config": [ 00:14:02.647 { 00:14:02.647 "method": "sock_set_default_impl", 00:14:02.647 "params": { 00:14:02.647 "impl_name": "uring" 00:14:02.647 } 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "method": "sock_impl_set_options", 00:14:02.647 "params": { 00:14:02.647 "impl_name": "ssl", 00:14:02.647 "recv_buf_size": 4096, 00:14:02.647 "send_buf_size": 4096, 00:14:02.647 "enable_recv_pipe": true, 00:14:02.647 "enable_quickack": false, 00:14:02.647 "enable_placement_id": 0, 00:14:02.647 "enable_zerocopy_send_server": true, 00:14:02.647 
"enable_zerocopy_send_client": false, 00:14:02.647 "zerocopy_threshold": 0, 00:14:02.647 "tls_version": 0, 00:14:02.647 "enable_ktls": false 00:14:02.647 } 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "method": "sock_impl_set_options", 00:14:02.647 "params": { 00:14:02.647 "impl_name": "posix", 00:14:02.647 "recv_buf_size": 2097152, 00:14:02.647 "send_buf_size": 2097152, 00:14:02.647 "enable_recv_pipe": true, 00:14:02.647 "enable_quickack": false, 00:14:02.647 "enable_placement_id": 0, 00:14:02.647 "enable_zerocopy_send_server": true, 00:14:02.647 "enable_zerocopy_send_client": false, 00:14:02.647 "zerocopy_threshold": 0, 00:14:02.647 "tls_version": 0, 00:14:02.647 "enable_ktls": false 00:14:02.647 } 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "method": "sock_impl_set_options", 00:14:02.647 "params": { 00:14:02.647 "impl_name": "uring", 00:14:02.647 "recv_buf_size": 2097152, 00:14:02.647 "send_buf_size": 2097152, 00:14:02.647 "enable_recv_pipe": true, 00:14:02.647 "enable_quickack": false, 00:14:02.647 "enable_placement_id": 0, 00:14:02.647 "enable_zerocopy_send_server": false, 00:14:02.647 "enable_zerocopy_send_client": false, 00:14:02.647 "zerocopy_threshold": 0, 00:14:02.647 "tls_version": 0, 00:14:02.647 "enable_ktls": false 00:14:02.647 } 00:14:02.647 } 00:14:02.647 ] 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "subsystem": "vmd", 00:14:02.647 "config": [] 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "subsystem": "accel", 00:14:02.647 "config": [ 00:14:02.647 { 00:14:02.647 "method": "accel_set_options", 00:14:02.647 "params": { 00:14:02.647 "small_cache_size": 128, 00:14:02.647 "large_cache_size": 16, 00:14:02.647 "task_count": 2048, 00:14:02.647 "sequence_count": 2048, 00:14:02.647 "buf_count": 2048 00:14:02.647 } 00:14:02.647 } 00:14:02.647 ] 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "subsystem": "bdev", 00:14:02.647 "config": [ 00:14:02.647 { 00:14:02.647 "method": "bdev_set_options", 00:14:02.647 "params": { 00:14:02.647 "bdev_io_pool_size": 65535, 00:14:02.647 "bdev_io_cache_size": 256, 00:14:02.647 "bdev_auto_examine": true, 00:14:02.647 "iobuf_small_cache_size": 128, 00:14:02.647 "iobuf_large_cache_size": 16 00:14:02.647 } 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "method": "bdev_raid_set_options", 00:14:02.647 "params": { 00:14:02.647 "process_window_size_kb": 1024, 00:14:02.647 "process_max_bandwidth_mb_sec": 0 00:14:02.647 } 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "method": "bdev_iscsi_set_options", 00:14:02.647 "params": { 00:14:02.647 "timeout_sec": 30 00:14:02.647 } 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "method": "bdev_nvme_set_options", 00:14:02.647 "params": { 00:14:02.647 "action_on_timeout": "none", 00:14:02.647 "timeout_us": 0, 00:14:02.647 "timeout_admin_us": 0, 00:14:02.647 "keep_alive_timeout_ms": 10000, 00:14:02.647 "arbitration_burst": 0, 00:14:02.647 "low_priority_weight": 0, 00:14:02.647 "medium_priority_weight": 0, 00:14:02.647 "high_priority_weight": 0, 00:14:02.647 "nvme_adminq_poll_period_us": 10000, 00:14:02.647 "nvme_ioq_poll_period_us": 0, 00:14:02.647 "io_queue_requests": 512, 00:14:02.647 "delay_cmd_submit": true, 00:14:02.647 "transport_retry_count": 4, 00:14:02.647 "bdev_retry_count": 3, 00:14:02.647 "transport_ack_timeout": 0, 00:14:02.647 "ctrlr_loss_timeout_sec": 0, 00:14:02.647 "reconnect_delay_sec": 0, 00:14:02.647 "fast_io_fail_timeout_sec": 0, 00:14:02.647 "disable_auto_failback": false, 00:14:02.647 "generate_uuids": false, 00:14:02.647 "transport_tos": 0, 00:14:02.647 "nvme_error_stat": false, 00:14:02.647 "rdma_srq_size": 0, 
00:14:02.647 "io_path_stat": false, 00:14:02.647 "allow_accel_sequence": false, 00:14:02.647 "rdma_max_cq_size": 0, 00:14:02.647 "rdma_cm_event_timeout_ms": 0, 00:14:02.647 "dhchap_digests": [ 00:14:02.647 "sha256", 00:14:02.647 "sha384", 00:14:02.647 "sha512" 00:14:02.647 ], 00:14:02.647 "dhchap_dhgroups": [ 00:14:02.647 "null", 00:14:02.647 "ffdhe2048", 00:14:02.647 "ffdhe3072", 00:14:02.647 "ffdhe4096", 00:14:02.647 "ffdhe6144", 00:14:02.647 "ffdhe8192" 00:14:02.647 ] 00:14:02.647 } 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "method": "bdev_nvme_attach_controller", 00:14:02.647 "params": { 00:14:02.647 "name": "TLSTEST", 00:14:02.647 "trtype": "TCP", 00:14:02.647 "adrfam": "IPv4", 00:14:02.647 "traddr": "10.0.0.3", 00:14:02.647 "trsvcid": "4420", 00:14:02.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.647 "prchk_reftag": false, 00:14:02.647 "prchk_guard": false, 00:14:02.647 "ctrlr_loss_timeout_sec": 0, 00:14:02.647 "reconnect_delay_sec": 0, 00:14:02.647 "fast_io_fail_timeout_sec": 0, 00:14:02.647 "psk": "key0", 00:14:02.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.647 "hdgst": false, 00:14:02.647 "ddgst": false, 00:14:02.647 "multipath": "multipath" 00:14:02.647 } 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "method": "bdev_nvme_set_hotplug", 00:14:02.647 "params": { 00:14:02.647 "period_us": 100000, 00:14:02.647 "enable": false 00:14:02.647 } 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "method": "bdev_wait_for_examine" 00:14:02.647 } 00:14:02.647 ] 00:14:02.647 }, 00:14:02.647 { 00:14:02.647 "subsystem": "nbd", 00:14:02.647 "config": [] 00:14:02.647 } 00:14:02.647 ] 00:14:02.647 }' 00:14:02.647 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72043 00:14:02.647 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 72043 ']' 00:14:02.647 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 72043 00:14:02.647 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:14:02.647 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:02.647 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72043 00:14:02.647 killing process with pid 72043 00:14:02.647 Received shutdown signal, test time was about 10.000000 seconds 00:14:02.647 00:14:02.647 Latency(us) 00:14:02.647 [2024-11-20T08:26:50.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.647 [2024-11-20T08:26:50.208Z] =================================================================================================================== 00:14:02.647 [2024-11-20T08:26:50.208Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:02.647 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:14:02.647 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:14:02.647 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72043' 00:14:02.647 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 72043 00:14:02.647 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 72043 00:14:02.906 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71987 00:14:02.906 08:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 71987 ']' 00:14:02.906 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 71987 00:14:02.906 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:14:02.906 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:02.906 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 71987 00:14:02.906 killing process with pid 71987 00:14:02.906 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:14:02.906 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:14:02.906 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 71987' 00:14:02.906 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 71987 00:14:02.906 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 71987 00:14:03.166 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:03.166 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:03.166 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:03.166 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.166 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:03.166 "subsystems": [ 00:14:03.166 { 00:14:03.166 "subsystem": "keyring", 00:14:03.166 "config": [ 00:14:03.166 { 00:14:03.166 "method": "keyring_file_add_key", 00:14:03.166 "params": { 00:14:03.166 "name": "key0", 00:14:03.166 "path": "/tmp/tmp.jQCv2Cn94U" 00:14:03.166 } 00:14:03.166 } 00:14:03.166 ] 00:14:03.166 }, 00:14:03.166 { 00:14:03.166 "subsystem": "iobuf", 00:14:03.166 "config": [ 00:14:03.166 { 00:14:03.166 "method": "iobuf_set_options", 00:14:03.166 "params": { 00:14:03.166 "small_pool_count": 8192, 00:14:03.166 "large_pool_count": 1024, 00:14:03.166 "small_bufsize": 8192, 00:14:03.166 "large_bufsize": 135168, 00:14:03.166 "enable_numa": false 00:14:03.166 } 00:14:03.166 } 00:14:03.166 ] 00:14:03.166 }, 00:14:03.166 { 00:14:03.166 "subsystem": "sock", 00:14:03.166 "config": [ 00:14:03.166 { 00:14:03.166 "method": "sock_set_default_impl", 00:14:03.166 "params": { 00:14:03.166 "impl_name": "uring" 00:14:03.166 } 00:14:03.166 }, 00:14:03.166 { 00:14:03.166 "method": "sock_impl_set_options", 00:14:03.166 "params": { 00:14:03.166 "impl_name": "ssl", 00:14:03.166 "recv_buf_size": 4096, 00:14:03.166 "send_buf_size": 4096, 00:14:03.166 "enable_recv_pipe": true, 00:14:03.166 "enable_quickack": false, 00:14:03.166 "enable_placement_id": 0, 00:14:03.166 "enable_zerocopy_send_server": true, 00:14:03.166 "enable_zerocopy_send_client": false, 00:14:03.166 "zerocopy_threshold": 0, 00:14:03.166 "tls_version": 0, 00:14:03.166 "enable_ktls": false 00:14:03.166 } 00:14:03.166 }, 00:14:03.166 { 00:14:03.166 "method": "sock_impl_set_options", 00:14:03.166 "params": { 00:14:03.166 "impl_name": "posix", 00:14:03.166 "recv_buf_size": 2097152, 00:14:03.166 "send_buf_size": 2097152, 00:14:03.166 "enable_recv_pipe": true, 00:14:03.166 "enable_quickack": false, 00:14:03.166 "enable_placement_id": 0, 00:14:03.166 
"enable_zerocopy_send_server": true, 00:14:03.166 "enable_zerocopy_send_client": false, 00:14:03.166 "zerocopy_threshold": 0, 00:14:03.166 "tls_version": 0, 00:14:03.166 "enable_ktls": false 00:14:03.166 } 00:14:03.166 }, 00:14:03.166 { 00:14:03.166 "method": "sock_impl_set_options", 00:14:03.166 "params": { 00:14:03.166 "impl_name": "uring", 00:14:03.166 "recv_buf_size": 2097152, 00:14:03.166 "send_buf_size": 2097152, 00:14:03.166 "enable_recv_pipe": true, 00:14:03.166 "enable_quickack": false, 00:14:03.166 "enable_placement_id": 0, 00:14:03.166 "enable_zerocopy_send_server": false, 00:14:03.166 "enable_zerocopy_send_client": false, 00:14:03.166 "zerocopy_threshold": 0, 00:14:03.166 "tls_version": 0, 00:14:03.166 "enable_ktls": false 00:14:03.166 } 00:14:03.166 } 00:14:03.166 ] 00:14:03.166 }, 00:14:03.166 { 00:14:03.166 "subsystem": "vmd", 00:14:03.166 "config": [] 00:14:03.166 }, 00:14:03.166 { 00:14:03.166 "subsystem": "accel", 00:14:03.166 "config": [ 00:14:03.166 { 00:14:03.166 "method": "accel_set_options", 00:14:03.166 "params": { 00:14:03.166 "small_cache_size": 128, 00:14:03.166 "large_cache_size": 16, 00:14:03.166 "task_count": 2048, 00:14:03.166 "sequence_count": 2048, 00:14:03.166 "buf_count": 2048 00:14:03.166 } 00:14:03.166 } 00:14:03.166 ] 00:14:03.166 }, 00:14:03.166 { 00:14:03.167 "subsystem": "bdev", 00:14:03.167 "config": [ 00:14:03.167 { 00:14:03.167 "method": "bdev_set_options", 00:14:03.167 "params": { 00:14:03.167 "bdev_io_pool_size": 65535, 00:14:03.167 "bdev_io_cache_size": 256, 00:14:03.167 "bdev_auto_examine": true, 00:14:03.167 "iobuf_small_cache_size": 128, 00:14:03.167 "iobuf_large_cache_size": 16 00:14:03.167 } 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "method": "bdev_raid_set_options", 00:14:03.167 "params": { 00:14:03.167 "process_window_size_kb": 1024, 00:14:03.167 "process_max_bandwidth_mb_sec": 0 00:14:03.167 } 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "method": "bdev_iscsi_set_options", 00:14:03.167 "params": { 00:14:03.167 "timeout_sec": 30 00:14:03.167 } 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "method": "bdev_nvme_set_options", 00:14:03.167 "params": { 00:14:03.167 "action_on_timeout": "none", 00:14:03.167 "timeout_us": 0, 00:14:03.167 "timeout_admin_us": 0, 00:14:03.167 "keep_alive_timeout_ms": 10000, 00:14:03.167 "arbitration_burst": 0, 00:14:03.167 "low_priority_weight": 0, 00:14:03.167 "medium_priority_weight": 0, 00:14:03.167 "high_priority_weight": 0, 00:14:03.167 "nvme_adminq_poll_period_us": 10000, 00:14:03.167 "nvme_ioq_poll_period_us": 0, 00:14:03.167 "io_queue_requests": 0, 00:14:03.167 "delay_cmd_submit": true, 00:14:03.167 "transport_retry_count": 4, 00:14:03.167 "bdev_retry_count": 3, 00:14:03.167 "transport_ack_timeout": 0, 00:14:03.167 "ctrlr_loss_timeout_sec": 0, 00:14:03.167 "reconnect_delay_sec": 0, 00:14:03.167 "fast_io_fail_timeout_sec": 0, 00:14:03.167 "disable_auto_failback": false, 00:14:03.167 "generate_uuids": false, 00:14:03.167 "transport_tos": 0, 00:14:03.167 "nvme_error_stat": false, 00:14:03.167 "rdma_srq_size": 0, 00:14:03.167 "io_path_stat": false, 00:14:03.167 "allow_accel_sequence": false, 00:14:03.167 "rdma_max_cq_size": 0, 00:14:03.167 "rdma_cm_event_timeout_ms": 0, 00:14:03.167 "dhchap_digests": [ 00:14:03.167 "sha256", 00:14:03.167 "sha384", 00:14:03.167 "sha512" 00:14:03.167 ], 00:14:03.167 "dhchap_dhgroups": [ 00:14:03.167 "null", 00:14:03.167 "ffdhe2048", 00:14:03.167 "ffdhe3072", 00:14:03.167 "ffdhe4096", 00:14:03.167 "ffdhe6144", 00:14:03.167 "ffdhe8192" 00:14:03.167 ] 00:14:03.167 } 00:14:03.167 }, 
00:14:03.167 { 00:14:03.167 "method": "bdev_nvme_set_hotplug", 00:14:03.167 "params": { 00:14:03.167 "period_us": 100000, 00:14:03.167 "enable": false 00:14:03.167 } 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "method": "bdev_malloc_create", 00:14:03.167 "params": { 00:14:03.167 "name": "malloc0", 00:14:03.167 "num_blocks": 8192, 00:14:03.167 "block_size": 4096, 00:14:03.167 "physical_block_size": 4096, 00:14:03.167 "uuid": "7d456553-d167-471a-b41e-283a401eb310", 00:14:03.167 "optimal_io_boundary": 0, 00:14:03.167 "md_size": 0, 00:14:03.167 "dif_type": 0, 00:14:03.167 "dif_is_head_of_md": false, 00:14:03.167 "dif_pi_format": 0 00:14:03.167 } 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "method": "bdev_wait_for_examine" 00:14:03.167 } 00:14:03.167 ] 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "subsystem": "nbd", 00:14:03.167 "config": [] 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "subsystem": "scheduler", 00:14:03.167 "config": [ 00:14:03.167 { 00:14:03.167 "method": "framework_set_scheduler", 00:14:03.167 "params": { 00:14:03.167 "name": "static" 00:14:03.167 } 00:14:03.167 } 00:14:03.167 ] 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "subsystem": "nvmf", 00:14:03.167 "config": [ 00:14:03.167 { 00:14:03.167 "method": "nvmf_set_config", 00:14:03.167 "params": { 00:14:03.167 "discovery_filter": "match_any", 00:14:03.167 "admin_cmd_passthru": { 00:14:03.167 "identify_ctrlr": false 00:14:03.167 }, 00:14:03.167 "dhchap_digests": [ 00:14:03.167 "sha256", 00:14:03.167 "sha384", 00:14:03.167 "sha512" 00:14:03.167 ], 00:14:03.167 "dhchap_dhgroups": [ 00:14:03.167 "null", 00:14:03.167 "ffdhe2048", 00:14:03.167 "ffdhe3072", 00:14:03.167 "ffdhe4096", 00:14:03.167 "ffdhe6144", 00:14:03.167 "ffdhe8192" 00:14:03.167 ] 00:14:03.167 } 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "method": "nvmf_set_max_subsystems", 00:14:03.167 "params": { 00:14:03.167 "max_subsystems": 1024 00:14:03.167 } 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "method": "nvmf_set_crdt", 00:14:03.167 "params": { 00:14:03.167 "crdt1": 0, 00:14:03.167 "crdt2": 0, 00:14:03.167 "crdt3": 0 00:14:03.167 } 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "method": "nvmf_create_transport", 00:14:03.167 "params": { 00:14:03.167 "trtype": "TCP", 00:14:03.167 "max_queue_depth": 128, 00:14:03.167 "max_io_qpairs_per_ctrlr": 127, 00:14:03.167 "in_capsule_data_size": 4096, 00:14:03.167 "max_io_size": 131072, 00:14:03.167 "io_unit_size": 131072, 00:14:03.167 "max_aq_depth": 128, 00:14:03.167 "num_shared_buffers": 511, 00:14:03.167 "buf_cache_size": 4294967295, 00:14:03.167 "dif_insert_or_strip": false, 00:14:03.167 "zcopy": false, 00:14:03.167 "c2h_success": false, 00:14:03.167 "sock_priority": 0, 00:14:03.167 "abort_timeout_sec": 1, 00:14:03.167 "ack_timeout": 0, 00:14:03.167 "data_wr_pool_size": 0 00:14:03.167 } 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "method": "nvmf_create_subsystem", 00:14:03.167 "params": { 00:14:03.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.167 "allow_any_host": false, 00:14:03.167 "serial_number": "SPDK00000000000001", 00:14:03.167 "model_number": "SPDK bdev Controller", 00:14:03.167 "max_namespaces": 10, 00:14:03.167 "min_cntlid": 1, 00:14:03.167 "max_cntlid": 65519, 00:14:03.167 "ana_reporting": false 00:14:03.167 } 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "method": "nvmf_subsystem_add_host", 00:14:03.167 "params": { 00:14:03.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.167 "host": "nqn.2016-06.io.spdk:host1", 00:14:03.167 "psk": "key0" 00:14:03.167 } 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "method": 
"nvmf_subsystem_add_ns", 00:14:03.167 "params": { 00:14:03.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.167 "namespace": { 00:14:03.167 "nsid": 1, 00:14:03.167 "bdev_name": "malloc0", 00:14:03.167 "nguid": "7D456553D167471AB41E283A401EB310", 00:14:03.167 "uuid": "7d456553-d167-471a-b41e-283a401eb310", 00:14:03.167 "no_auto_visible": false 00:14:03.167 } 00:14:03.167 } 00:14:03.167 }, 00:14:03.167 { 00:14:03.167 "method": "nvmf_subsystem_add_listener", 00:14:03.167 "params": { 00:14:03.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.167 "listen_address": { 00:14:03.167 "trtype": "TCP", 00:14:03.167 "adrfam": "IPv4", 00:14:03.167 "traddr": "10.0.0.3", 00:14:03.167 "trsvcid": "4420" 00:14:03.167 }, 00:14:03.167 "secure_channel": true 00:14:03.167 } 00:14:03.167 } 00:14:03.167 ] 00:14:03.167 } 00:14:03.167 ] 00:14:03.167 }' 00:14:03.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.167 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72092 00:14:03.167 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72092 00:14:03.167 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:03.167 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 72092 ']' 00:14:03.168 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.168 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:03.168 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.168 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:03.168 08:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.427 [2024-11-20 08:26:50.772051] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:03.427 [2024-11-20 08:26:50.772358] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.427 [2024-11-20 08:26:50.916026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.427 [2024-11-20 08:26:50.982429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.427 [2024-11-20 08:26:50.982767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.427 [2024-11-20 08:26:50.982966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.427 [2024-11-20 08:26:50.983231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.427 [2024-11-20 08:26:50.983243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:03.427 [2024-11-20 08:26:50.983804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.685 [2024-11-20 08:26:51.171546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:03.943 [2024-11-20 08:26:51.265876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.943 [2024-11-20 08:26:51.297803] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:03.943 [2024-11-20 08:26:51.298089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@735 -- # xtrace_disable 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72130 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72130 /var/tmp/bdevperf.sock 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 72130 ']' 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
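In this second variant the configuration is not rebuilt with per-object RPCs: the JSON captured earlier with save_config ($tgtconf and $bdevperfconf above) is replayed at startup through a file descriptor, presumably via process substitution, which is why the trace shows -c /dev/fd/62 and -c /dev/fd/63. A sketch of that launch, under those assumptions:

  # target and bdevperf both boot directly from the saved JSON config
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")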
00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:04.510 08:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:04.510 "subsystems": [ 00:14:04.510 { 00:14:04.510 "subsystem": "keyring", 00:14:04.510 "config": [ 00:14:04.510 { 00:14:04.510 "method": "keyring_file_add_key", 00:14:04.510 "params": { 00:14:04.510 "name": "key0", 00:14:04.510 "path": "/tmp/tmp.jQCv2Cn94U" 00:14:04.510 } 00:14:04.510 } 00:14:04.510 ] 00:14:04.510 }, 00:14:04.510 { 00:14:04.510 "subsystem": "iobuf", 00:14:04.510 "config": [ 00:14:04.510 { 00:14:04.510 "method": "iobuf_set_options", 00:14:04.510 "params": { 00:14:04.510 "small_pool_count": 8192, 00:14:04.510 "large_pool_count": 1024, 00:14:04.510 "small_bufsize": 8192, 00:14:04.510 "large_bufsize": 135168, 00:14:04.510 "enable_numa": false 00:14:04.510 } 00:14:04.510 } 00:14:04.510 ] 00:14:04.510 }, 00:14:04.510 { 00:14:04.510 "subsystem": "sock", 00:14:04.510 "config": [ 00:14:04.510 { 00:14:04.510 "method": "sock_set_default_impl", 00:14:04.510 "params": { 00:14:04.510 "impl_name": "uring" 00:14:04.510 } 00:14:04.510 }, 00:14:04.510 { 00:14:04.510 "method": "sock_impl_set_options", 00:14:04.510 "params": { 00:14:04.510 "impl_name": "ssl", 00:14:04.510 "recv_buf_size": 4096, 00:14:04.510 "send_buf_size": 4096, 00:14:04.510 "enable_recv_pipe": true, 00:14:04.511 "enable_quickack": false, 00:14:04.511 "enable_placement_id": 0, 00:14:04.511 "enable_zerocopy_send_server": true, 00:14:04.511 "enable_zerocopy_send_client": false, 00:14:04.511 "zerocopy_threshold": 0, 00:14:04.511 "tls_version": 0, 00:14:04.511 "enable_ktls": false 00:14:04.511 } 00:14:04.511 }, 00:14:04.511 { 00:14:04.511 "method": "sock_impl_set_options", 00:14:04.511 "params": { 00:14:04.511 "impl_name": "posix", 00:14:04.511 "recv_buf_size": 2097152, 00:14:04.511 "send_buf_size": 2097152, 00:14:04.511 "enable_recv_pipe": true, 00:14:04.511 "enable_quickack": false, 00:14:04.511 "enable_placement_id": 0, 00:14:04.511 "enable_zerocopy_send_server": true, 00:14:04.511 "enable_zerocopy_send_client": false, 00:14:04.511 "zerocopy_threshold": 0, 00:14:04.511 "tls_version": 0, 00:14:04.511 "enable_ktls": false 00:14:04.511 } 00:14:04.511 }, 00:14:04.511 { 00:14:04.511 "method": "sock_impl_set_options", 00:14:04.511 "params": { 00:14:04.511 "impl_name": "uring", 00:14:04.511 "recv_buf_size": 2097152, 00:14:04.511 "send_buf_size": 2097152, 00:14:04.511 "enable_recv_pipe": true, 00:14:04.511 "enable_quickack": false, 00:14:04.511 "enable_placement_id": 0, 00:14:04.511 "enable_zerocopy_send_server": false, 00:14:04.511 "enable_zerocopy_send_client": false, 00:14:04.511 "zerocopy_threshold": 0, 00:14:04.511 "tls_version": 0, 00:14:04.511 "enable_ktls": false 00:14:04.511 } 00:14:04.511 } 00:14:04.511 ] 00:14:04.511 }, 00:14:04.511 { 00:14:04.511 "subsystem": "vmd", 00:14:04.511 "config": [] 00:14:04.511 }, 00:14:04.511 { 00:14:04.511 "subsystem": "accel", 00:14:04.511 "config": [ 00:14:04.511 { 00:14:04.511 "method": "accel_set_options", 00:14:04.511 "params": { 00:14:04.511 "small_cache_size": 128, 00:14:04.511 "large_cache_size": 16, 00:14:04.511 "task_count": 2048, 00:14:04.511 "sequence_count": 
2048, 00:14:04.511 "buf_count": 2048 00:14:04.511 } 00:14:04.511 } 00:14:04.511 ] 00:14:04.511 }, 00:14:04.511 { 00:14:04.511 "subsystem": "bdev", 00:14:04.511 "config": [ 00:14:04.511 { 00:14:04.511 "method": "bdev_set_options", 00:14:04.511 "params": { 00:14:04.511 "bdev_io_pool_size": 65535, 00:14:04.511 "bdev_io_cache_size": 256, 00:14:04.511 "bdev_auto_examine": true, 00:14:04.511 "iobuf_small_cache_size": 128, 00:14:04.511 "iobuf_large_cache_size": 16 00:14:04.511 } 00:14:04.511 }, 00:14:04.511 { 00:14:04.511 "method": "bdev_raid_set_options", 00:14:04.511 "params": { 00:14:04.511 "process_window_size_kb": 1024, 00:14:04.511 "process_max_bandwidth_mb_sec": 0 00:14:04.511 } 00:14:04.511 }, 00:14:04.511 { 00:14:04.511 "method": "bdev_iscsi_set_options", 00:14:04.511 "params": { 00:14:04.511 "timeout_sec": 30 00:14:04.511 } 00:14:04.511 }, 00:14:04.511 { 00:14:04.511 "method": "bdev_nvme_set_options", 00:14:04.511 "params": { 00:14:04.511 "action_on_timeout": "none", 00:14:04.511 "timeout_us": 0, 00:14:04.511 "timeout_admin_us": 0, 00:14:04.511 "keep_alive_timeout_ms": 10000, 00:14:04.511 "arbitration_burst": 0, 00:14:04.511 "low_priority_weight": 0, 00:14:04.511 "medium_priority_weight": 0, 00:14:04.511 "high_priority_weight": 0, 00:14:04.511 "nvme_adminq_poll_period_us": 10000, 00:14:04.511 "nvme_ioq_poll_period_us": 0, 00:14:04.511 "io_queue_requests": 512, 00:14:04.511 "delay_cmd_submit": true, 00:14:04.511 "transport_retry_count": 4, 00:14:04.511 "bdev_retry_count": 3, 00:14:04.511 "transport_ack_timeout": 0, 00:14:04.511 "ctrlr_loss_timeout_sec": 0, 00:14:04.511 "reconnect_delay_sec": 0, 00:14:04.511 "fast_io_fail_timeout_sec": 0, 00:14:04.511 "disable_auto_failback": false, 00:14:04.511 "generate_uuids": false, 00:14:04.511 "transport_tos": 0, 00:14:04.511 "nvme_error_stat": false, 00:14:04.511 "rdma_srq_size": 0, 00:14:04.511 "io_path_stat": false, 00:14:04.511 "allow_accel_sequence": false, 00:14:04.511 "rdma_max_cq_size": 0, 00:14:04.511 "rdma_cm_event_timeout_ms": 0, 00:14:04.511 "dhchap_digests": [ 00:14:04.511 "sha256", 00:14:04.511 "sha384", 00:14:04.511 "sha512" 00:14:04.511 ], 00:14:04.511 "dhchap_dhgroups": [ 00:14:04.511 "null", 00:14:04.511 "ffdhe2048", 00:14:04.511 "ffdhe3072", 00:14:04.511 "ffdhe4096", 00:14:04.511 "ffdhe6144", 00:14:04.511 "ffdhe8192" 00:14:04.511 ] 00:14:04.511 } 00:14:04.511 }, 00:14:04.511 { 00:14:04.511 "method": "bdev_nvme_attach_controller", 00:14:04.511 "params": { 00:14:04.511 "name": "TLSTEST", 00:14:04.511 "trtype": "TCP", 00:14:04.511 "adrfam": "IPv4", 00:14:04.511 "traddr": "10.0.0.3", 00:14:04.511 "trsvcid": "4420", 00:14:04.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.511 "prchk_reftag": false, 00:14:04.511 "prchk_guard": false, 00:14:04.511 "ctrlr_loss_timeout_sec": 0, 00:14:04.511 "reconnect_delay_sec": 0, 00:14:04.511 "fast_io_fail_timeout_sec": 0, 00:14:04.511 "psk": "key0", 00:14:04.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:04.511 "hdgst": false, 00:14:04.511 "ddgst": false, 00:14:04.511 "multipath": "multipath" 00:14:04.511 } 00:14:04.511 }, 00:14:04.511 { 00:14:04.511 "method": "bdev_nvme_set_hotplug", 00:14:04.511 "params": { 00:14:04.511 "period_us": 100000, 00:14:04.511 "enable": false 00:14:04.511 } 00:14:04.511 }, 00:14:04.511 { 00:14:04.511 "method": "bdev_wait_for_examine" 00:14:04.511 } 00:14:04.511 ] 00:14:04.511 }, 00:14:04.511 { 00:14:04.511 "subsystem": "nbd", 00:14:04.511 "config": [] 00:14:04.511 } 00:14:04.511 ] 00:14:04.511 }' 00:14:04.511 [2024-11-20 08:26:51.932137] Starting SPDK v25.01-pre git 
sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:04.511 [2024-11-20 08:26:51.932285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72130 ] 00:14:04.771 [2024-11-20 08:26:52.094985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.771 [2024-11-20 08:26:52.154056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.771 [2024-11-20 08:26:52.295976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:05.030 [2024-11-20 08:26:52.345907] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:05.598 08:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:05.598 08:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:14:05.598 08:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:05.598 Running I/O for 10 seconds... 00:14:07.469 3850.00 IOPS, 15.04 MiB/s [2024-11-20T08:26:56.407Z] 3954.50 IOPS, 15.45 MiB/s [2024-11-20T08:26:57.356Z] 3963.00 IOPS, 15.48 MiB/s [2024-11-20T08:26:58.293Z] 3965.00 IOPS, 15.49 MiB/s [2024-11-20T08:26:59.230Z] 3976.80 IOPS, 15.53 MiB/s [2024-11-20T08:27:00.164Z] 4001.00 IOPS, 15.63 MiB/s [2024-11-20T08:27:01.100Z] 4006.86 IOPS, 15.65 MiB/s [2024-11-20T08:27:02.034Z] 4012.25 IOPS, 15.67 MiB/s [2024-11-20T08:27:03.412Z] 4009.67 IOPS, 15.66 MiB/s [2024-11-20T08:27:03.412Z] 4012.00 IOPS, 15.67 MiB/s 00:14:15.851 Latency(us) 00:14:15.851 [2024-11-20T08:27:03.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.851 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:15.851 Verification LBA range: start 0x0 length 0x2000 00:14:15.851 TLSTESTn1 : 10.02 4017.73 15.69 0.00 0.00 31800.86 6374.87 23831.27 00:14:15.851 [2024-11-20T08:27:03.412Z] =================================================================================================================== 00:14:15.851 [2024-11-20T08:27:03.412Z] Total : 4017.73 15.69 0.00 0.00 31800.86 6374.87 23831.27 00:14:15.851 { 00:14:15.851 "results": [ 00:14:15.851 { 00:14:15.851 "job": "TLSTESTn1", 00:14:15.851 "core_mask": "0x4", 00:14:15.851 "workload": "verify", 00:14:15.851 "status": "finished", 00:14:15.851 "verify_range": { 00:14:15.851 "start": 0, 00:14:15.851 "length": 8192 00:14:15.851 }, 00:14:15.851 "queue_depth": 128, 00:14:15.851 "io_size": 4096, 00:14:15.851 "runtime": 10.01709, 00:14:15.851 "iops": 4017.7336931184605, 00:14:15.851 "mibps": 15.694272238743986, 00:14:15.852 "io_failed": 0, 00:14:15.852 "io_timeout": 0, 00:14:15.852 "avg_latency_us": 31800.860813632524, 00:14:15.852 "min_latency_us": 6374.865454545455, 00:14:15.852 "max_latency_us": 23831.272727272728 00:14:15.852 } 00:14:15.852 ], 00:14:15.852 "core_count": 1 00:14:15.852 } 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72130 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 72130 ']' 00:14:15.852 
08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 72130 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72130 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72130' 00:14:15.852 killing process with pid 72130 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 72130 00:14:15.852 Received shutdown signal, test time was about 10.000000 seconds 00:14:15.852 00:14:15.852 Latency(us) 00:14:15.852 [2024-11-20T08:27:03.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.852 [2024-11-20T08:27:03.413Z] =================================================================================================================== 00:14:15.852 [2024-11-20T08:27:03.413Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 72130 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72092 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 72092 ']' 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 72092 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72092 00:14:15.852 killing process with pid 72092 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72092' 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 72092 00:14:15.852 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 72092 00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
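nvmfappstart above reduces to launching the target inside the test's network namespace and polling its RPC socket until it answers; a rough shell equivalent of the helper, with the rpc_get_methods probe and the -t timeout standing in for waitforlisten (an assumption, not traced here):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # crude waitforlisten: block until /var/tmp/spdk.sock answers an RPC
  ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null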
00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72263 00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72263 00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 72263 ']' 00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:16.110 08:27:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.110 [2024-11-20 08:27:03.634402] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:16.110 [2024-11-20 08:27:03.634724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.368 [2024-11-20 08:27:03.788866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.368 [2024-11-20 08:27:03.856156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.368 [2024-11-20 08:27:03.856471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.368 [2024-11-20 08:27:03.856521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.368 [2024-11-20 08:27:03.856544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.368 [2024-11-20 08:27:03.856558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
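The app_setup_trace notices appear for every nvmf_tgt start in this log because the target runs with -e 0xFFFF, i.e. all tracepoint groups enabled. As the notice says, a snapshot can be pulled from the live target with spdk_trace (path assuming the usual build layout):

  ./build/bin/spdk_trace -s nvmf -i 0

or the /dev/shm/nvmf_trace.0 ring buffer can simply be copied off the host for offline analysis.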
00:14:16.368 [2024-11-20 08:27:03.857076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.368 [2024-11-20 08:27:03.914364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:17.302 08:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:17.302 08:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:14:17.302 08:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:17.302 08:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@735 -- # xtrace_disable 00:14:17.302 08:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.302 08:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.302 08:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.jQCv2Cn94U 00:14:17.302 08:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jQCv2Cn94U 00:14:17.302 08:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:17.561 [2024-11-20 08:27:04.910023] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.561 08:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:17.820 08:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:18.078 [2024-11-20 08:27:05.502147] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:18.078 [2024-11-20 08:27:05.502429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:18.078 08:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:18.336 malloc0 00:14:18.336 08:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:18.594 08:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U 00:14:18.852 08:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:19.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
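Strung together, the setup_nvmf_tgt trace above is seven RPCs against the default /var/tmp/spdk.sock; condensed below with rpc.py standing for scripts/rpc.py in the SPDK repo, and the interleaved PSK file /tmp/tmp.jQCv2Cn94U assumed to have been generated earlier in the test:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The "TLS support is considered experimental" notices come from the -k listener; once registered, the PSK is referenced by its keyring name (key0) rather than by path.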
00:14:19.111 08:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72324 00:14:19.111 08:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:19.111 08:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:19.111 08:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72324 /var/tmp/bdevperf.sock 00:14:19.111 08:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 72324 ']' 00:14:19.111 08:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.111 08:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:19.111 08:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.111 08:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:19.111 08:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.111 [2024-11-20 08:27:06.578433] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:19.111 [2024-11-20 08:27:06.578698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72324 ] 00:14:19.370 [2024-11-20 08:27:06.727316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.370 [2024-11-20 08:27:06.803578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.370 [2024-11-20 08:27:06.878491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.305 08:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:20.305 08:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:14:20.305 08:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U 00:14:20.305 08:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:20.563 [2024-11-20 08:27:08.049565] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:20.563 nvme0n1 00:14:20.822 08:27:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:20.822 Running I/O for 1 seconds... 
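On the initiator side the same PSK file is registered with the bdevperf application's own RPC server before the controller is attached; bdevperf was started with -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1, and the three calls driven above condense to:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The key name only needs to match what keyring_file_add_key registered on this socket; it is the key contents that must match the target's PSK for the TLS handshake to succeed.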
00:14:21.757 3949.00 IOPS, 15.43 MiB/s 00:14:21.757 Latency(us) 00:14:21.757 [2024-11-20T08:27:09.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.757 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:21.757 Verification LBA range: start 0x0 length 0x2000 00:14:21.757 nvme0n1 : 1.02 4006.22 15.65 0.00 0.00 31650.60 5659.93 26691.03 00:14:21.757 [2024-11-20T08:27:09.318Z] =================================================================================================================== 00:14:21.757 [2024-11-20T08:27:09.318Z] Total : 4006.22 15.65 0.00 0.00 31650.60 5659.93 26691.03 00:14:21.757 { 00:14:21.757 "results": [ 00:14:21.757 { 00:14:21.757 "job": "nvme0n1", 00:14:21.757 "core_mask": "0x2", 00:14:21.757 "workload": "verify", 00:14:21.757 "status": "finished", 00:14:21.757 "verify_range": { 00:14:21.757 "start": 0, 00:14:21.757 "length": 8192 00:14:21.757 }, 00:14:21.757 "queue_depth": 128, 00:14:21.757 "io_size": 4096, 00:14:21.757 "runtime": 1.017916, 00:14:21.757 "iops": 4006.224482177311, 00:14:21.757 "mibps": 15.649314383505121, 00:14:21.757 "io_failed": 0, 00:14:21.757 "io_timeout": 0, 00:14:21.757 "avg_latency_us": 31650.59863569486, 00:14:21.757 "min_latency_us": 5659.927272727273, 00:14:21.757 "max_latency_us": 26691.025454545455 00:14:21.757 } 00:14:21.757 ], 00:14:21.757 "core_count": 1 00:14:21.757 } 00:14:21.757 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72324 00:14:21.757 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 72324 ']' 00:14:21.757 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 72324 00:14:21.757 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:14:21.757 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:21.757 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72324 00:14:22.015 killing process with pid 72324 00:14:22.015 Received shutdown signal, test time was about 1.000000 seconds 00:14:22.015 00:14:22.015 Latency(us) 00:14:22.015 [2024-11-20T08:27:09.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.015 [2024-11-20T08:27:09.576Z] =================================================================================================================== 00:14:22.015 [2024-11-20T08:27:09.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.015 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:14:22.015 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:14:22.015 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72324' 00:14:22.015 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 72324 00:14:22.015 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 72324 00:14:22.272 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72263 00:14:22.272 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 72263 ']' 00:14:22.272 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 72263 00:14:22.272 08:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:14:22.272 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:22.272 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72263 00:14:22.272 killing process with pid 72263 00:14:22.272 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72263' 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 72263 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 72263 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72375 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72375 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 72375 ']' 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:22.273 08:27:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.531 [2024-11-20 08:27:09.879258] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:22.531 [2024-11-20 08:27:09.879359] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.531 [2024-11-20 08:27:10.029233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.531 [2024-11-20 08:27:10.078216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.531 [2024-11-20 08:27:10.078273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
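Between iterations the previous bdevperf/nvmf_tgt pair is torn down with killprocess before the next nvmfappstart; as the kill/wait lines in the trace show, for the reactor_* processes this reduces to:

  kill "$pid" && wait "$pid"   # SIGTERM the app, then reap it before the next start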
00:14:22.531 [2024-11-20 08:27:10.078300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.531 [2024-11-20 08:27:10.078308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.531 [2024-11-20 08:27:10.078314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.531 [2024-11-20 08:27:10.078676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.790 [2024-11-20 08:27:10.131412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@735 -- # xtrace_disable 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.790 [2024-11-20 08:27:10.241369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.790 malloc0 00:14:22.790 [2024-11-20 08:27:10.272644] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:22.790 [2024-11-20 08:27:10.273129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:22.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72401 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72401 /var/tmp/bdevperf.sock 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 72401 ']' 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:22.790 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.049 [2024-11-20 08:27:10.352323] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:23.049 [2024-11-20 08:27:10.352565] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72401 ] 00:14:23.049 [2024-11-20 08:27:10.492600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.049 [2024-11-20 08:27:10.554052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.307 [2024-11-20 08:27:10.627437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:23.307 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:23.308 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:14:23.308 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jQCv2Cn94U 00:14:23.582 08:27:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:23.894 [2024-11-20 08:27:11.289286] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:23.894 nvme0n1 00:14:23.894 08:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:24.153 Running I/O for 1 seconds... 
00:14:25.088 3949.00 IOPS, 15.43 MiB/s 00:14:25.088 Latency(us) 00:14:25.088 [2024-11-20T08:27:12.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.088 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:25.088 Verification LBA range: start 0x0 length 0x2000 00:14:25.088 nvme0n1 : 1.01 4016.89 15.69 0.00 0.00 31617.38 4557.73 25499.46 00:14:25.088 [2024-11-20T08:27:12.649Z] =================================================================================================================== 00:14:25.088 [2024-11-20T08:27:12.649Z] Total : 4016.89 15.69 0.00 0.00 31617.38 4557.73 25499.46 00:14:25.088 { 00:14:25.088 "results": [ 00:14:25.088 { 00:14:25.088 "job": "nvme0n1", 00:14:25.088 "core_mask": "0x2", 00:14:25.088 "workload": "verify", 00:14:25.088 "status": "finished", 00:14:25.088 "verify_range": { 00:14:25.088 "start": 0, 00:14:25.088 "length": 8192 00:14:25.088 }, 00:14:25.088 "queue_depth": 128, 00:14:25.088 "io_size": 4096, 00:14:25.088 "runtime": 1.014964, 00:14:25.088 "iops": 4016.891239492238, 00:14:25.088 "mibps": 15.690981404266555, 00:14:25.088 "io_failed": 0, 00:14:25.088 "io_timeout": 0, 00:14:25.088 "avg_latency_us": 31617.375357103036, 00:14:25.088 "min_latency_us": 4557.730909090909, 00:14:25.088 "max_latency_us": 25499.46181818182 00:14:25.088 } 00:14:25.088 ], 00:14:25.088 "core_count": 1 00:14:25.088 } 00:14:25.088 08:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:25.088 08:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:25.088 08:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.351 08:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:25.351 08:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:25.351 "subsystems": [ 00:14:25.351 { 00:14:25.351 "subsystem": "keyring", 00:14:25.351 "config": [ 00:14:25.351 { 00:14:25.351 "method": "keyring_file_add_key", 00:14:25.351 "params": { 00:14:25.351 "name": "key0", 00:14:25.351 "path": "/tmp/tmp.jQCv2Cn94U" 00:14:25.351 } 00:14:25.351 } 00:14:25.351 ] 00:14:25.351 }, 00:14:25.351 { 00:14:25.351 "subsystem": "iobuf", 00:14:25.351 "config": [ 00:14:25.351 { 00:14:25.351 "method": "iobuf_set_options", 00:14:25.351 "params": { 00:14:25.351 "small_pool_count": 8192, 00:14:25.351 "large_pool_count": 1024, 00:14:25.351 "small_bufsize": 8192, 00:14:25.351 "large_bufsize": 135168, 00:14:25.351 "enable_numa": false 00:14:25.351 } 00:14:25.351 } 00:14:25.352 ] 00:14:25.352 }, 00:14:25.352 { 00:14:25.352 "subsystem": "sock", 00:14:25.352 "config": [ 00:14:25.352 { 00:14:25.352 "method": "sock_set_default_impl", 00:14:25.352 "params": { 00:14:25.352 "impl_name": "uring" 00:14:25.352 } 00:14:25.352 }, 00:14:25.352 { 00:14:25.352 "method": "sock_impl_set_options", 00:14:25.352 "params": { 00:14:25.352 "impl_name": "ssl", 00:14:25.352 "recv_buf_size": 4096, 00:14:25.352 "send_buf_size": 4096, 00:14:25.352 "enable_recv_pipe": true, 00:14:25.352 "enable_quickack": false, 00:14:25.352 "enable_placement_id": 0, 00:14:25.352 "enable_zerocopy_send_server": true, 00:14:25.352 "enable_zerocopy_send_client": false, 00:14:25.352 "zerocopy_threshold": 0, 00:14:25.352 "tls_version": 0, 00:14:25.352 "enable_ktls": false 00:14:25.352 } 00:14:25.352 }, 00:14:25.352 { 00:14:25.352 "method": "sock_impl_set_options", 00:14:25.352 "params": { 00:14:25.352 "impl_name": "posix", 
00:14:25.352 "recv_buf_size": 2097152, 00:14:25.352 "send_buf_size": 2097152, 00:14:25.352 "enable_recv_pipe": true, 00:14:25.352 "enable_quickack": false, 00:14:25.352 "enable_placement_id": 0, 00:14:25.352 "enable_zerocopy_send_server": true, 00:14:25.352 "enable_zerocopy_send_client": false, 00:14:25.352 "zerocopy_threshold": 0, 00:14:25.352 "tls_version": 0, 00:14:25.352 "enable_ktls": false 00:14:25.352 } 00:14:25.352 }, 00:14:25.352 { 00:14:25.352 "method": "sock_impl_set_options", 00:14:25.352 "params": { 00:14:25.352 "impl_name": "uring", 00:14:25.352 "recv_buf_size": 2097152, 00:14:25.352 "send_buf_size": 2097152, 00:14:25.352 "enable_recv_pipe": true, 00:14:25.352 "enable_quickack": false, 00:14:25.352 "enable_placement_id": 0, 00:14:25.352 "enable_zerocopy_send_server": false, 00:14:25.352 "enable_zerocopy_send_client": false, 00:14:25.352 "zerocopy_threshold": 0, 00:14:25.352 "tls_version": 0, 00:14:25.352 "enable_ktls": false 00:14:25.352 } 00:14:25.352 } 00:14:25.352 ] 00:14:25.352 }, 00:14:25.352 { 00:14:25.352 "subsystem": "vmd", 00:14:25.352 "config": [] 00:14:25.352 }, 00:14:25.352 { 00:14:25.352 "subsystem": "accel", 00:14:25.352 "config": [ 00:14:25.352 { 00:14:25.352 "method": "accel_set_options", 00:14:25.352 "params": { 00:14:25.352 "small_cache_size": 128, 00:14:25.352 "large_cache_size": 16, 00:14:25.352 "task_count": 2048, 00:14:25.352 "sequence_count": 2048, 00:14:25.352 "buf_count": 2048 00:14:25.352 } 00:14:25.352 } 00:14:25.352 ] 00:14:25.352 }, 00:14:25.352 { 00:14:25.353 "subsystem": "bdev", 00:14:25.353 "config": [ 00:14:25.353 { 00:14:25.353 "method": "bdev_set_options", 00:14:25.353 "params": { 00:14:25.353 "bdev_io_pool_size": 65535, 00:14:25.353 "bdev_io_cache_size": 256, 00:14:25.353 "bdev_auto_examine": true, 00:14:25.353 "iobuf_small_cache_size": 128, 00:14:25.353 "iobuf_large_cache_size": 16 00:14:25.353 } 00:14:25.353 }, 00:14:25.353 { 00:14:25.353 "method": "bdev_raid_set_options", 00:14:25.353 "params": { 00:14:25.353 "process_window_size_kb": 1024, 00:14:25.353 "process_max_bandwidth_mb_sec": 0 00:14:25.353 } 00:14:25.353 }, 00:14:25.353 { 00:14:25.353 "method": "bdev_iscsi_set_options", 00:14:25.353 "params": { 00:14:25.353 "timeout_sec": 30 00:14:25.353 } 00:14:25.353 }, 00:14:25.353 { 00:14:25.353 "method": "bdev_nvme_set_options", 00:14:25.353 "params": { 00:14:25.353 "action_on_timeout": "none", 00:14:25.353 "timeout_us": 0, 00:14:25.353 "timeout_admin_us": 0, 00:14:25.353 "keep_alive_timeout_ms": 10000, 00:14:25.353 "arbitration_burst": 0, 00:14:25.353 "low_priority_weight": 0, 00:14:25.353 "medium_priority_weight": 0, 00:14:25.353 "high_priority_weight": 0, 00:14:25.353 "nvme_adminq_poll_period_us": 10000, 00:14:25.353 "nvme_ioq_poll_period_us": 0, 00:14:25.353 "io_queue_requests": 0, 00:14:25.353 "delay_cmd_submit": true, 00:14:25.353 "transport_retry_count": 4, 00:14:25.353 "bdev_retry_count": 3, 00:14:25.353 "transport_ack_timeout": 0, 00:14:25.353 "ctrlr_loss_timeout_sec": 0, 00:14:25.353 "reconnect_delay_sec": 0, 00:14:25.353 "fast_io_fail_timeout_sec": 0, 00:14:25.353 "disable_auto_failback": false, 00:14:25.353 "generate_uuids": false, 00:14:25.353 "transport_tos": 0, 00:14:25.353 "nvme_error_stat": false, 00:14:25.353 "rdma_srq_size": 0, 00:14:25.353 "io_path_stat": false, 00:14:25.353 "allow_accel_sequence": false, 00:14:25.353 "rdma_max_cq_size": 0, 00:14:25.353 "rdma_cm_event_timeout_ms": 0, 00:14:25.353 "dhchap_digests": [ 00:14:25.353 "sha256", 00:14:25.353 "sha384", 00:14:25.353 "sha512" 00:14:25.353 ], 00:14:25.353 
"dhchap_dhgroups": [ 00:14:25.353 "null", 00:14:25.353 "ffdhe2048", 00:14:25.353 "ffdhe3072", 00:14:25.353 "ffdhe4096", 00:14:25.353 "ffdhe6144", 00:14:25.353 "ffdhe8192" 00:14:25.353 ] 00:14:25.353 } 00:14:25.353 }, 00:14:25.353 { 00:14:25.353 "method": "bdev_nvme_set_hotplug", 00:14:25.353 "params": { 00:14:25.353 "period_us": 100000, 00:14:25.353 "enable": false 00:14:25.353 } 00:14:25.353 }, 00:14:25.353 { 00:14:25.353 "method": "bdev_malloc_create", 00:14:25.353 "params": { 00:14:25.353 "name": "malloc0", 00:14:25.353 "num_blocks": 8192, 00:14:25.353 "block_size": 4096, 00:14:25.353 "physical_block_size": 4096, 00:14:25.353 "uuid": "b3973515-c3bb-4215-82e5-58248e322637", 00:14:25.353 "optimal_io_boundary": 0, 00:14:25.353 "md_size": 0, 00:14:25.353 "dif_type": 0, 00:14:25.353 "dif_is_head_of_md": false, 00:14:25.353 "dif_pi_format": 0 00:14:25.353 } 00:14:25.353 }, 00:14:25.353 { 00:14:25.353 "method": "bdev_wait_for_examine" 00:14:25.353 } 00:14:25.353 ] 00:14:25.353 }, 00:14:25.353 { 00:14:25.353 "subsystem": "nbd", 00:14:25.353 "config": [] 00:14:25.353 }, 00:14:25.353 { 00:14:25.353 "subsystem": "scheduler", 00:14:25.353 "config": [ 00:14:25.353 { 00:14:25.353 "method": "framework_set_scheduler", 00:14:25.353 "params": { 00:14:25.354 "name": "static" 00:14:25.354 } 00:14:25.354 } 00:14:25.354 ] 00:14:25.354 }, 00:14:25.354 { 00:14:25.354 "subsystem": "nvmf", 00:14:25.354 "config": [ 00:14:25.354 { 00:14:25.354 "method": "nvmf_set_config", 00:14:25.354 "params": { 00:14:25.354 "discovery_filter": "match_any", 00:14:25.354 "admin_cmd_passthru": { 00:14:25.354 "identify_ctrlr": false 00:14:25.354 }, 00:14:25.354 "dhchap_digests": [ 00:14:25.354 "sha256", 00:14:25.354 "sha384", 00:14:25.354 "sha512" 00:14:25.354 ], 00:14:25.354 "dhchap_dhgroups": [ 00:14:25.354 "null", 00:14:25.354 "ffdhe2048", 00:14:25.354 "ffdhe3072", 00:14:25.354 "ffdhe4096", 00:14:25.354 "ffdhe6144", 00:14:25.354 "ffdhe8192" 00:14:25.354 ] 00:14:25.354 } 00:14:25.354 }, 00:14:25.354 { 00:14:25.354 "method": "nvmf_set_max_subsystems", 00:14:25.354 "params": { 00:14:25.355 "max_subsystems": 1024 00:14:25.355 } 00:14:25.355 }, 00:14:25.355 { 00:14:25.355 "method": "nvmf_set_crdt", 00:14:25.355 "params": { 00:14:25.355 "crdt1": 0, 00:14:25.355 "crdt2": 0, 00:14:25.355 "crdt3": 0 00:14:25.355 } 00:14:25.355 }, 00:14:25.355 { 00:14:25.355 "method": "nvmf_create_transport", 00:14:25.355 "params": { 00:14:25.355 "trtype": "TCP", 00:14:25.355 "max_queue_depth": 128, 00:14:25.355 "max_io_qpairs_per_ctrlr": 127, 00:14:25.355 "in_capsule_data_size": 4096, 00:14:25.355 "max_io_size": 131072, 00:14:25.355 "io_unit_size": 131072, 00:14:25.355 "max_aq_depth": 128, 00:14:25.355 "num_shared_buffers": 511, 00:14:25.355 "buf_cache_size": 4294967295, 00:14:25.355 "dif_insert_or_strip": false, 00:14:25.355 "zcopy": false, 00:14:25.355 "c2h_success": false, 00:14:25.355 "sock_priority": 0, 00:14:25.355 "abort_timeout_sec": 1, 00:14:25.355 "ack_timeout": 0, 00:14:25.355 "data_wr_pool_size": 0 00:14:25.355 } 00:14:25.355 }, 00:14:25.355 { 00:14:25.355 "method": "nvmf_create_subsystem", 00:14:25.355 "params": { 00:14:25.355 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.355 "allow_any_host": false, 00:14:25.355 "serial_number": "00000000000000000000", 00:14:25.355 "model_number": "SPDK bdev Controller", 00:14:25.355 "max_namespaces": 32, 00:14:25.355 "min_cntlid": 1, 00:14:25.355 "max_cntlid": 65519, 00:14:25.355 "ana_reporting": false 00:14:25.355 } 00:14:25.355 }, 00:14:25.355 { 00:14:25.355 "method": "nvmf_subsystem_add_host", 
00:14:25.355 "params": { 00:14:25.355 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.355 "host": "nqn.2016-06.io.spdk:host1", 00:14:25.355 "psk": "key0" 00:14:25.355 } 00:14:25.355 }, 00:14:25.355 { 00:14:25.355 "method": "nvmf_subsystem_add_ns", 00:14:25.355 "params": { 00:14:25.355 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.356 "namespace": { 00:14:25.356 "nsid": 1, 00:14:25.356 "bdev_name": "malloc0", 00:14:25.356 "nguid": "B3973515C3BB421582E558248E322637", 00:14:25.356 "uuid": "b3973515-c3bb-4215-82e5-58248e322637", 00:14:25.356 "no_auto_visible": false 00:14:25.356 } 00:14:25.356 } 00:14:25.356 }, 00:14:25.356 { 00:14:25.356 "method": "nvmf_subsystem_add_listener", 00:14:25.356 "params": { 00:14:25.356 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.356 "listen_address": { 00:14:25.356 "trtype": "TCP", 00:14:25.357 "adrfam": "IPv4", 00:14:25.357 "traddr": "10.0.0.3", 00:14:25.357 "trsvcid": "4420" 00:14:25.357 }, 00:14:25.357 "secure_channel": false, 00:14:25.357 "sock_impl": "ssl" 00:14:25.357 } 00:14:25.357 } 00:14:25.357 ] 00:14:25.357 } 00:14:25.357 ] 00:14:25.357 }' 00:14:25.357 08:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:25.621 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:25.621 "subsystems": [ 00:14:25.621 { 00:14:25.621 "subsystem": "keyring", 00:14:25.621 "config": [ 00:14:25.621 { 00:14:25.621 "method": "keyring_file_add_key", 00:14:25.621 "params": { 00:14:25.621 "name": "key0", 00:14:25.621 "path": "/tmp/tmp.jQCv2Cn94U" 00:14:25.621 } 00:14:25.621 } 00:14:25.621 ] 00:14:25.621 }, 00:14:25.621 { 00:14:25.621 "subsystem": "iobuf", 00:14:25.621 "config": [ 00:14:25.621 { 00:14:25.621 "method": "iobuf_set_options", 00:14:25.621 "params": { 00:14:25.621 "small_pool_count": 8192, 00:14:25.621 "large_pool_count": 1024, 00:14:25.621 "small_bufsize": 8192, 00:14:25.621 "large_bufsize": 135168, 00:14:25.621 "enable_numa": false 00:14:25.621 } 00:14:25.621 } 00:14:25.621 ] 00:14:25.621 }, 00:14:25.621 { 00:14:25.621 "subsystem": "sock", 00:14:25.621 "config": [ 00:14:25.621 { 00:14:25.621 "method": "sock_set_default_impl", 00:14:25.621 "params": { 00:14:25.621 "impl_name": "uring" 00:14:25.621 } 00:14:25.621 }, 00:14:25.621 { 00:14:25.621 "method": "sock_impl_set_options", 00:14:25.621 "params": { 00:14:25.621 "impl_name": "ssl", 00:14:25.621 "recv_buf_size": 4096, 00:14:25.621 "send_buf_size": 4096, 00:14:25.621 "enable_recv_pipe": true, 00:14:25.621 "enable_quickack": false, 00:14:25.621 "enable_placement_id": 0, 00:14:25.621 "enable_zerocopy_send_server": true, 00:14:25.621 "enable_zerocopy_send_client": false, 00:14:25.621 "zerocopy_threshold": 0, 00:14:25.621 "tls_version": 0, 00:14:25.621 "enable_ktls": false 00:14:25.621 } 00:14:25.621 }, 00:14:25.621 { 00:14:25.621 "method": "sock_impl_set_options", 00:14:25.621 "params": { 00:14:25.621 "impl_name": "posix", 00:14:25.621 "recv_buf_size": 2097152, 00:14:25.621 "send_buf_size": 2097152, 00:14:25.621 "enable_recv_pipe": true, 00:14:25.621 "enable_quickack": false, 00:14:25.621 "enable_placement_id": 0, 00:14:25.621 "enable_zerocopy_send_server": true, 00:14:25.621 "enable_zerocopy_send_client": false, 00:14:25.621 "zerocopy_threshold": 0, 00:14:25.621 "tls_version": 0, 00:14:25.621 "enable_ktls": false 00:14:25.621 } 00:14:25.621 }, 00:14:25.621 { 00:14:25.621 "method": "sock_impl_set_options", 00:14:25.621 "params": { 00:14:25.621 "impl_name": "uring", 00:14:25.621 
"recv_buf_size": 2097152, 00:14:25.621 "send_buf_size": 2097152, 00:14:25.621 "enable_recv_pipe": true, 00:14:25.621 "enable_quickack": false, 00:14:25.621 "enable_placement_id": 0, 00:14:25.621 "enable_zerocopy_send_server": false, 00:14:25.621 "enable_zerocopy_send_client": false, 00:14:25.621 "zerocopy_threshold": 0, 00:14:25.621 "tls_version": 0, 00:14:25.621 "enable_ktls": false 00:14:25.621 } 00:14:25.622 } 00:14:25.622 ] 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "subsystem": "vmd", 00:14:25.622 "config": [] 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "subsystem": "accel", 00:14:25.622 "config": [ 00:14:25.622 { 00:14:25.622 "method": "accel_set_options", 00:14:25.622 "params": { 00:14:25.622 "small_cache_size": 128, 00:14:25.622 "large_cache_size": 16, 00:14:25.622 "task_count": 2048, 00:14:25.622 "sequence_count": 2048, 00:14:25.622 "buf_count": 2048 00:14:25.622 } 00:14:25.622 } 00:14:25.622 ] 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "subsystem": "bdev", 00:14:25.622 "config": [ 00:14:25.622 { 00:14:25.622 "method": "bdev_set_options", 00:14:25.622 "params": { 00:14:25.622 "bdev_io_pool_size": 65535, 00:14:25.622 "bdev_io_cache_size": 256, 00:14:25.622 "bdev_auto_examine": true, 00:14:25.622 "iobuf_small_cache_size": 128, 00:14:25.622 "iobuf_large_cache_size": 16 00:14:25.622 } 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "method": "bdev_raid_set_options", 00:14:25.622 "params": { 00:14:25.622 "process_window_size_kb": 1024, 00:14:25.622 "process_max_bandwidth_mb_sec": 0 00:14:25.622 } 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "method": "bdev_iscsi_set_options", 00:14:25.622 "params": { 00:14:25.622 "timeout_sec": 30 00:14:25.622 } 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "method": "bdev_nvme_set_options", 00:14:25.622 "params": { 00:14:25.622 "action_on_timeout": "none", 00:14:25.622 "timeout_us": 0, 00:14:25.622 "timeout_admin_us": 0, 00:14:25.622 "keep_alive_timeout_ms": 10000, 00:14:25.622 "arbitration_burst": 0, 00:14:25.622 "low_priority_weight": 0, 00:14:25.622 "medium_priority_weight": 0, 00:14:25.622 "high_priority_weight": 0, 00:14:25.622 "nvme_adminq_poll_period_us": 10000, 00:14:25.622 "nvme_ioq_poll_period_us": 0, 00:14:25.622 "io_queue_requests": 512, 00:14:25.622 "delay_cmd_submit": true, 00:14:25.622 "transport_retry_count": 4, 00:14:25.622 "bdev_retry_count": 3, 00:14:25.622 "transport_ack_timeout": 0, 00:14:25.622 "ctrlr_loss_timeout_sec": 0, 00:14:25.622 "reconnect_delay_sec": 0, 00:14:25.622 "fast_io_fail_timeout_sec": 0, 00:14:25.622 "disable_auto_failback": false, 00:14:25.622 "generate_uuids": false, 00:14:25.622 "transport_tos": 0, 00:14:25.622 "nvme_error_stat": false, 00:14:25.622 "rdma_srq_size": 0, 00:14:25.622 "io_path_stat": false, 00:14:25.622 "allow_accel_sequence": false, 00:14:25.622 "rdma_max_cq_size": 0, 00:14:25.622 "rdma_cm_event_timeout_ms": 0, 00:14:25.622 "dhchap_digests": [ 00:14:25.622 "sha256", 00:14:25.622 "sha384", 00:14:25.622 "sha512" 00:14:25.622 ], 00:14:25.622 "dhchap_dhgroups": [ 00:14:25.622 "null", 00:14:25.622 "ffdhe2048", 00:14:25.622 "ffdhe3072", 00:14:25.622 "ffdhe4096", 00:14:25.622 "ffdhe6144", 00:14:25.622 "ffdhe8192" 00:14:25.622 ] 00:14:25.622 } 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "method": "bdev_nvme_attach_controller", 00:14:25.622 "params": { 00:14:25.622 "name": "nvme0", 00:14:25.622 "trtype": "TCP", 00:14:25.622 "adrfam": "IPv4", 00:14:25.622 "traddr": "10.0.0.3", 00:14:25.622 "trsvcid": "4420", 00:14:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.622 "prchk_reftag": false, 00:14:25.622 
"prchk_guard": false, 00:14:25.622 "ctrlr_loss_timeout_sec": 0, 00:14:25.622 "reconnect_delay_sec": 0, 00:14:25.622 "fast_io_fail_timeout_sec": 0, 00:14:25.622 "psk": "key0", 00:14:25.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:25.622 "hdgst": false, 00:14:25.622 "ddgst": false, 00:14:25.622 "multipath": "multipath" 00:14:25.622 } 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "method": "bdev_nvme_set_hotplug", 00:14:25.622 "params": { 00:14:25.622 "period_us": 100000, 00:14:25.622 "enable": false 00:14:25.622 } 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "method": "bdev_enable_histogram", 00:14:25.622 "params": { 00:14:25.622 "name": "nvme0n1", 00:14:25.622 "enable": true 00:14:25.622 } 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "method": "bdev_wait_for_examine" 00:14:25.622 } 00:14:25.622 ] 00:14:25.622 }, 00:14:25.622 { 00:14:25.622 "subsystem": "nbd", 00:14:25.622 "config": [] 00:14:25.622 } 00:14:25.622 ] 00:14:25.622 }' 00:14:25.622 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72401 00:14:25.622 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 72401 ']' 00:14:25.622 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 72401 00:14:25.622 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:14:25.622 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:25.622 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72401 00:14:25.622 killing process with pid 72401 00:14:25.622 Received shutdown signal, test time was about 1.000000 seconds 00:14:25.622 00:14:25.622 Latency(us) 00:14:25.622 [2024-11-20T08:27:13.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.622 [2024-11-20T08:27:13.183Z] =================================================================================================================== 00:14:25.622 [2024-11-20T08:27:13.183Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:25.622 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:14:25.622 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:14:25.622 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72401' 00:14:25.622 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 72401 00:14:25.622 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 72401 00:14:25.881 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72375 00:14:25.881 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 72375 ']' 00:14:25.881 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 72375 00:14:25.881 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:14:25.881 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:25.881 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72375 00:14:25.881 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:14:25.881 08:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:14:25.881 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72375' 00:14:25.881 killing process with pid 72375 00:14:25.881 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 72375 00:14:25.881 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 72375 00:14:26.141 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:26.141 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:26.141 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:26.141 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.141 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:26.141 "subsystems": [ 00:14:26.141 { 00:14:26.141 "subsystem": "keyring", 00:14:26.141 "config": [ 00:14:26.141 { 00:14:26.141 "method": "keyring_file_add_key", 00:14:26.141 "params": { 00:14:26.141 "name": "key0", 00:14:26.141 "path": "/tmp/tmp.jQCv2Cn94U" 00:14:26.141 } 00:14:26.141 } 00:14:26.141 ] 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "subsystem": "iobuf", 00:14:26.141 "config": [ 00:14:26.141 { 00:14:26.141 "method": "iobuf_set_options", 00:14:26.141 "params": { 00:14:26.141 "small_pool_count": 8192, 00:14:26.141 "large_pool_count": 1024, 00:14:26.141 "small_bufsize": 8192, 00:14:26.141 "large_bufsize": 135168, 00:14:26.141 "enable_numa": false 00:14:26.141 } 00:14:26.141 } 00:14:26.141 ] 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "subsystem": "sock", 00:14:26.141 "config": [ 00:14:26.141 { 00:14:26.141 "method": "sock_set_default_impl", 00:14:26.141 "params": { 00:14:26.141 "impl_name": "uring" 00:14:26.141 } 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "method": "sock_impl_set_options", 00:14:26.141 "params": { 00:14:26.141 "impl_name": "ssl", 00:14:26.141 "recv_buf_size": 4096, 00:14:26.141 "send_buf_size": 4096, 00:14:26.141 "enable_recv_pipe": true, 00:14:26.141 "enable_quickack": false, 00:14:26.141 "enable_placement_id": 0, 00:14:26.141 "enable_zerocopy_send_server": true, 00:14:26.141 "enable_zerocopy_send_client": false, 00:14:26.141 "zerocopy_threshold": 0, 00:14:26.141 "tls_version": 0, 00:14:26.141 "enable_ktls": false 00:14:26.141 } 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "method": "sock_impl_set_options", 00:14:26.141 "params": { 00:14:26.141 "impl_name": "posix", 00:14:26.141 "recv_buf_size": 2097152, 00:14:26.141 "send_buf_size": 2097152, 00:14:26.141 "enable_recv_pipe": true, 00:14:26.141 "enable_quickack": false, 00:14:26.141 "enable_placement_id": 0, 00:14:26.141 "enable_zerocopy_send_server": true, 00:14:26.141 "enable_zerocopy_send_client": false, 00:14:26.141 "zerocopy_threshold": 0, 00:14:26.141 "tls_version": 0, 00:14:26.141 "enable_ktls": false 00:14:26.141 } 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "method": "sock_impl_set_options", 00:14:26.141 "params": { 00:14:26.141 "impl_name": "uring", 00:14:26.141 "recv_buf_size": 2097152, 00:14:26.141 "send_buf_size": 2097152, 00:14:26.141 "enable_recv_pipe": true, 00:14:26.141 "enable_quickack": false, 00:14:26.141 "enable_placement_id": 0, 00:14:26.141 "enable_zerocopy_send_server": false, 00:14:26.141 "enable_zerocopy_send_client": false, 00:14:26.141 "zerocopy_threshold": 0, 00:14:26.141 
"tls_version": 0, 00:14:26.141 "enable_ktls": false 00:14:26.141 } 00:14:26.141 } 00:14:26.141 ] 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "subsystem": "vmd", 00:14:26.141 "config": [] 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "subsystem": "accel", 00:14:26.141 "config": [ 00:14:26.141 { 00:14:26.141 "method": "accel_set_options", 00:14:26.141 "params": { 00:14:26.141 "small_cache_size": 128, 00:14:26.141 "large_cache_size": 16, 00:14:26.141 "task_count": 2048, 00:14:26.141 "sequence_count": 2048, 00:14:26.141 "buf_count": 2048 00:14:26.141 } 00:14:26.141 } 00:14:26.141 ] 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "subsystem": "bdev", 00:14:26.141 "config": [ 00:14:26.141 { 00:14:26.141 "method": "bdev_set_options", 00:14:26.141 "params": { 00:14:26.141 "bdev_io_pool_size": 65535, 00:14:26.141 "bdev_io_cache_size": 256, 00:14:26.141 "bdev_auto_examine": true, 00:14:26.141 "iobuf_small_cache_size": 128, 00:14:26.141 "iobuf_large_cache_size": 16 00:14:26.141 } 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "method": "bdev_raid_set_options", 00:14:26.141 "params": { 00:14:26.141 "process_window_size_kb": 1024, 00:14:26.141 "process_max_bandwidth_mb_sec": 0 00:14:26.141 } 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "method": "bdev_iscsi_set_options", 00:14:26.141 "params": { 00:14:26.141 "timeout_sec": 30 00:14:26.141 } 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "method": "bdev_nvme_set_options", 00:14:26.141 "params": { 00:14:26.141 "action_on_timeout": "none", 00:14:26.141 "timeout_us": 0, 00:14:26.141 "timeout_admin_us": 0, 00:14:26.141 "keep_alive_timeout_ms": 10000, 00:14:26.141 "arbitration_burst": 0, 00:14:26.141 "low_priority_weight": 0, 00:14:26.141 "medium_priority_weight": 0, 00:14:26.141 "high_priority_weight": 0, 00:14:26.141 "nvme_adminq_poll_period_us": 10000, 00:14:26.141 "nvme_ioq_poll_period_us": 0, 00:14:26.141 "io_queue_requests": 0, 00:14:26.141 "delay_cmd_submit": true, 00:14:26.141 "transport_retry_count": 4, 00:14:26.141 "bdev_retry_count": 3, 00:14:26.141 "transport_ack_timeout": 0, 00:14:26.141 "ctrlr_loss_timeout_sec": 0, 00:14:26.141 "reconnect_delay_sec": 0, 00:14:26.141 "fast_io_fail_timeout_sec": 0, 00:14:26.141 "disable_auto_failback": false, 00:14:26.141 "generate_uuids": false, 00:14:26.141 "transport_tos": 0, 00:14:26.141 "nvme_error_stat": false, 00:14:26.141 "rdma_srq_size": 0, 00:14:26.141 "io_path_stat": false, 00:14:26.141 "allow_accel_sequence": false, 00:14:26.141 "rdma_max_cq_size": 0, 00:14:26.141 "rdma_cm_event_timeout_ms": 0, 00:14:26.141 "dhchap_digests": [ 00:14:26.141 "sha256", 00:14:26.141 "sha384", 00:14:26.141 "sha512" 00:14:26.141 ], 00:14:26.141 "dhchap_dhgroups": [ 00:14:26.141 "null", 00:14:26.141 "ffdhe2048", 00:14:26.141 "ffdhe3072", 00:14:26.141 "ffdhe4096", 00:14:26.141 "ffdhe6144", 00:14:26.141 "ffdhe8192" 00:14:26.141 ] 00:14:26.141 } 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "method": "bdev_nvme_set_hotplug", 00:14:26.141 "params": { 00:14:26.141 "period_us": 100000, 00:14:26.141 "enable": false 00:14:26.141 } 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "method": "bdev_malloc_create", 00:14:26.141 "params": { 00:14:26.141 "name": "malloc0", 00:14:26.141 "num_blocks": 8192, 00:14:26.141 "block_size": 4096, 00:14:26.141 "physical_block_size": 4096, 00:14:26.141 "uuid": "b3973515-c3bb-4215-82e5-58248e322637", 00:14:26.141 "optimal_io_boundary": 0, 00:14:26.141 "md_size": 0, 00:14:26.141 "dif_type": 0, 00:14:26.141 "dif_is_head_of_md": false, 00:14:26.141 "dif_pi_format": 0 00:14:26.141 } 00:14:26.141 }, 00:14:26.141 { 
00:14:26.141 "method": "bdev_wait_for_examine" 00:14:26.141 } 00:14:26.141 ] 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "subsystem": "nbd", 00:14:26.141 "config": [] 00:14:26.141 }, 00:14:26.141 { 00:14:26.141 "subsystem": "scheduler", 00:14:26.141 "config": [ 00:14:26.141 { 00:14:26.141 "method": "framework_set_scheduler", 00:14:26.141 "params": { 00:14:26.141 "name": "static" 00:14:26.142 } 00:14:26.142 } 00:14:26.142 ] 00:14:26.142 }, 00:14:26.142 { 00:14:26.142 "subsystem": "nvmf", 00:14:26.142 "config": [ 00:14:26.142 { 00:14:26.142 "method": "nvmf_set_config", 00:14:26.142 "params": { 00:14:26.142 "discovery_filter": "match_any", 00:14:26.142 "admin_cmd_passthru": { 00:14:26.142 "identify_ctrlr": false 00:14:26.142 }, 00:14:26.142 "dhchap_digests": [ 00:14:26.142 "sha256", 00:14:26.142 "sha384", 00:14:26.142 "sha512" 00:14:26.142 ], 00:14:26.142 "dhchap_dhgroups": [ 00:14:26.142 "null", 00:14:26.142 "ffdhe2048", 00:14:26.142 "ffdhe3072", 00:14:26.142 "ffdhe4096", 00:14:26.142 "ffdhe6144", 00:14:26.142 "ffdhe8192" 00:14:26.142 ] 00:14:26.142 } 00:14:26.142 }, 00:14:26.142 { 00:14:26.142 "method": "nvmf_set_max_subsystems", 00:14:26.142 "params": { 00:14:26.142 "max_subsystems": 1024 00:14:26.142 } 00:14:26.142 }, 00:14:26.142 { 00:14:26.142 "method": "nvmf_set_crdt", 00:14:26.142 "params": { 00:14:26.142 "crdt1": 0, 00:14:26.142 "crdt2": 0, 00:14:26.142 "crdt3": 0 00:14:26.142 } 00:14:26.142 }, 00:14:26.142 { 00:14:26.142 "method": "nvmf_create_transport", 00:14:26.142 "params": { 00:14:26.142 "trtype": "TCP", 00:14:26.142 "max_queue_depth": 128, 00:14:26.142 "max_io_qpairs_per_ctrlr": 127, 00:14:26.142 "in_capsule_data_size": 4096, 00:14:26.142 "max_io_size": 131072, 00:14:26.142 "io_unit_size": 131072, 00:14:26.142 "max_aq_depth": 128, 00:14:26.142 "num_shared_buffers": 511, 00:14:26.142 "buf_cache_size": 4294967295, 00:14:26.142 "dif_insert_or_strip": false, 00:14:26.142 "zcopy": false, 00:14:26.142 "c2h_success": false, 00:14:26.142 "sock_priority": 0, 00:14:26.142 "abort_timeout_sec": 1, 00:14:26.142 "ack_timeout": 0, 00:14:26.142 "data_wr_pool_size": 0 00:14:26.142 } 00:14:26.142 }, 00:14:26.142 { 00:14:26.142 "method": "nvmf_create_subsystem", 00:14:26.142 "params": { 00:14:26.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.142 "allow_any_host": false, 00:14:26.142 "serial_number": "00000000000000000000", 00:14:26.142 "model_number": "SPDK bdev Controller", 00:14:26.142 "max_namespaces": 32, 00:14:26.142 "min_cntlid": 1, 00:14:26.142 "max_cntlid": 65519, 00:14:26.142 "ana_reporting": false 00:14:26.142 } 00:14:26.142 }, 00:14:26.142 { 00:14:26.142 "method": "nvmf_subsystem_add_host", 00:14:26.142 "params": { 00:14:26.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.142 "host": "nqn.2016-06.io.spdk:host1", 00:14:26.142 "psk": "key0" 00:14:26.142 } 00:14:26.142 }, 00:14:26.142 { 00:14:26.142 "method": "nvmf_subsystem_add_ns", 00:14:26.142 "params": { 00:14:26.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.142 "namespace": { 00:14:26.142 "nsid": 1, 00:14:26.142 "bdev_name": "malloc0", 00:14:26.142 "nguid": "B3973515C3BB421582E558248E322637", 00:14:26.142 "uuid": "b3973515-c3bb-4215-82e5-58248e322637", 00:14:26.142 "no_auto_visible": false 00:14:26.142 } 00:14:26.142 } 00:14:26.142 }, 00:14:26.142 { 00:14:26.142 "method": "nvmf_subsystem_add_listener", 00:14:26.142 "params": { 00:14:26.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.142 "listen_address": { 00:14:26.142 "trtype": "TCP", 00:14:26.142 "adrfam": "IPv4", 00:14:26.142 "traddr": "10.0.0.3", 00:14:26.142 
"trsvcid": "4420" 00:14:26.142 }, 00:14:26.142 "secure_channel": false, 00:14:26.142 "sock_impl": "ssl" 00:14:26.142 } 00:14:26.142 } 00:14:26.142 ] 00:14:26.142 } 00:14:26.142 ] 00:14:26.142 }' 00:14:26.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.142 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72454 00:14:26.142 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72454 00:14:26.142 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 72454 ']' 00:14:26.142 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.142 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:26.142 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.142 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:26.142 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:26.142 08:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.142 [2024-11-20 08:27:13.589672] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:26.142 [2024-11-20 08:27:13.589992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.401 [2024-11-20 08:27:13.735703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.401 [2024-11-20 08:27:13.786063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.401 [2024-11-20 08:27:13.786119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.401 [2024-11-20 08:27:13.786147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.401 [2024-11-20 08:27:13.786155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.401 [2024-11-20 08:27:13.786161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:26.401 [2024-11-20 08:27:13.786575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.401 [2024-11-20 08:27:13.957170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:26.660 [2024-11-20 08:27:14.037672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.660 [2024-11-20 08:27:14.069631] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:26.660 [2024-11-20 08:27:14.069903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@735 -- # xtrace_disable 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72486 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72486 /var/tmp/bdevperf.sock 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # '[' -z 72486 ']' 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
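A note on the key material: /tmp/tmp.jQCv2Cn94U is the PSK file both ends of this test reference through the keyring name key0 (the initiator config below registers it with keyring_file_add_key, and the target's nvmf_subsystem_add_host above points at the same name). Such a file is expected to hold a single line in the NVMe/TCP TLS PSK interchange format, roughly:

  NVMeTLSkey-1:01:<base64(configured PSK + CRC32)>:

where the 01 field selects the SHA-256 hash variant. The actual key value is generated earlier in the test run and never appears in this excerpt; suggesting recent nvme-cli's gen-tls-key helper as one way to produce such a key is an assumption, not something this log demonstrates.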
00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:27.228 08:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:27.228 "subsystems": [ 00:14:27.228 { 00:14:27.228 "subsystem": "keyring", 00:14:27.228 "config": [ 00:14:27.228 { 00:14:27.228 "method": "keyring_file_add_key", 00:14:27.228 "params": { 00:14:27.228 "name": "key0", 00:14:27.228 "path": "/tmp/tmp.jQCv2Cn94U" 00:14:27.228 } 00:14:27.228 } 00:14:27.228 ] 00:14:27.228 }, 00:14:27.228 { 00:14:27.228 "subsystem": "iobuf", 00:14:27.228 "config": [ 00:14:27.228 { 00:14:27.228 "method": "iobuf_set_options", 00:14:27.228 "params": { 00:14:27.228 "small_pool_count": 8192, 00:14:27.228 "large_pool_count": 1024, 00:14:27.228 "small_bufsize": 8192, 00:14:27.228 "large_bufsize": 135168, 00:14:27.228 "enable_numa": false 00:14:27.228 } 00:14:27.228 } 00:14:27.228 ] 00:14:27.228 }, 00:14:27.228 { 00:14:27.228 "subsystem": "sock", 00:14:27.228 "config": [ 00:14:27.228 { 00:14:27.228 "method": "sock_set_default_impl", 00:14:27.228 "params": { 00:14:27.228 "impl_name": "uring" 00:14:27.228 } 00:14:27.228 }, 00:14:27.228 { 00:14:27.228 "method": "sock_impl_set_options", 00:14:27.228 "params": { 00:14:27.228 "impl_name": "ssl", 00:14:27.228 "recv_buf_size": 4096, 00:14:27.228 "send_buf_size": 4096, 00:14:27.228 "enable_recv_pipe": true, 00:14:27.228 "enable_quickack": false, 00:14:27.228 "enable_placement_id": 0, 00:14:27.228 "enable_zerocopy_send_server": true, 00:14:27.228 "enable_zerocopy_send_client": false, 00:14:27.228 "zerocopy_threshold": 0, 00:14:27.228 "tls_version": 0, 00:14:27.228 "enable_ktls": false 00:14:27.228 } 00:14:27.228 }, 00:14:27.228 { 00:14:27.228 "method": "sock_impl_set_options", 00:14:27.228 "params": { 00:14:27.228 "impl_name": "posix", 00:14:27.228 "recv_buf_size": 2097152, 00:14:27.228 "send_buf_size": 2097152, 00:14:27.228 "enable_recv_pipe": true, 00:14:27.228 "enable_quickack": false, 00:14:27.228 "enable_placement_id": 0, 00:14:27.228 "enable_zerocopy_send_server": true, 00:14:27.228 "enable_zerocopy_send_client": false, 00:14:27.228 "zerocopy_threshold": 0, 00:14:27.228 "tls_version": 0, 00:14:27.228 "enable_ktls": false 00:14:27.228 } 00:14:27.228 }, 00:14:27.228 { 00:14:27.228 "method": "sock_impl_set_options", 00:14:27.229 "params": { 00:14:27.229 "impl_name": "uring", 00:14:27.229 "recv_buf_size": 2097152, 00:14:27.229 "send_buf_size": 2097152, 00:14:27.229 "enable_recv_pipe": true, 00:14:27.229 "enable_quickack": false, 00:14:27.229 "enable_placement_id": 0, 00:14:27.229 "enable_zerocopy_send_server": false, 00:14:27.229 "enable_zerocopy_send_client": false, 00:14:27.229 "zerocopy_threshold": 0, 00:14:27.229 "tls_version": 0, 00:14:27.229 "enable_ktls": false 00:14:27.229 } 00:14:27.229 } 00:14:27.229 ] 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "subsystem": "vmd", 00:14:27.229 "config": [] 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "subsystem": "accel", 00:14:27.229 "config": [ 00:14:27.229 { 00:14:27.229 "method": "accel_set_options", 00:14:27.229 "params": { 00:14:27.229 "small_cache_size": 128, 00:14:27.229 "large_cache_size": 16, 00:14:27.229 "task_count": 2048, 00:14:27.229 "sequence_count": 2048, 
00:14:27.229 "buf_count": 2048 00:14:27.229 } 00:14:27.229 } 00:14:27.229 ] 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "subsystem": "bdev", 00:14:27.229 "config": [ 00:14:27.229 { 00:14:27.229 "method": "bdev_set_options", 00:14:27.229 "params": { 00:14:27.229 "bdev_io_pool_size": 65535, 00:14:27.229 "bdev_io_cache_size": 256, 00:14:27.229 "bdev_auto_examine": true, 00:14:27.229 "iobuf_small_cache_size": 128, 00:14:27.229 "iobuf_large_cache_size": 16 00:14:27.229 } 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "method": "bdev_raid_set_options", 00:14:27.229 "params": { 00:14:27.229 "process_window_size_kb": 1024, 00:14:27.229 "process_max_bandwidth_mb_sec": 0 00:14:27.229 } 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "method": "bdev_iscsi_set_options", 00:14:27.229 "params": { 00:14:27.229 "timeout_sec": 30 00:14:27.229 } 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "method": "bdev_nvme_set_options", 00:14:27.229 "params": { 00:14:27.229 "action_on_timeout": "none", 00:14:27.229 "timeout_us": 0, 00:14:27.229 "timeout_admin_us": 0, 00:14:27.229 "keep_alive_timeout_ms": 10000, 00:14:27.229 "arbitration_burst": 0, 00:14:27.229 "low_priority_weight": 0, 00:14:27.229 "medium_priority_weight": 0, 00:14:27.229 "high_priority_weight": 0, 00:14:27.229 "nvme_adminq_poll_period_us": 10000, 00:14:27.229 "nvme_ioq_poll_period_us": 0, 00:14:27.229 "io_queue_requests": 512, 00:14:27.229 "delay_cmd_submit": true, 00:14:27.229 "transport_retry_count": 4, 00:14:27.229 "bdev_retry_count": 3, 00:14:27.229 "transport_ack_timeout": 0, 00:14:27.229 "ctrlr_loss_timeout_sec": 0, 00:14:27.229 "reconnect_delay_sec": 0, 00:14:27.229 "fast_io_fail_timeout_sec": 0, 00:14:27.229 "disable_auto_failback": false, 00:14:27.229 "generate_uuids": false, 00:14:27.229 "transport_tos": 0, 00:14:27.229 "nvme_error_stat": false, 00:14:27.229 "rdma_srq_size": 0, 00:14:27.229 "io_path_stat": false, 00:14:27.229 "allow_accel_sequence": false, 00:14:27.229 "rdma_max_cq_size": 0, 00:14:27.229 "rdma_cm_event_timeout_ms": 0, 00:14:27.229 "dhchap_digests": [ 00:14:27.229 "sha256", 00:14:27.229 "sha384", 00:14:27.229 "sha512" 00:14:27.229 ], 00:14:27.229 "dhchap_dhgroups": [ 00:14:27.229 "null", 00:14:27.229 "ffdhe2048", 00:14:27.229 "ffdhe3072", 00:14:27.229 "ffdhe4096", 00:14:27.229 "ffdhe6144", 00:14:27.229 "ffdhe8192" 00:14:27.229 ] 00:14:27.229 } 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "method": "bdev_nvme_attach_controller", 00:14:27.229 "params": { 00:14:27.229 "name": "nvme0", 00:14:27.229 "trtype": "TCP", 00:14:27.229 "adrfam": "IPv4", 00:14:27.229 "traddr": "10.0.0.3", 00:14:27.229 "trsvcid": "4420", 00:14:27.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.229 "prchk_reftag": false, 00:14:27.229 "prchk_guard": false, 00:14:27.229 "ctrlr_loss_timeout_sec": 0, 00:14:27.229 "reconnect_delay_sec": 0, 00:14:27.229 "fast_io_fail_timeout_sec": 0, 00:14:27.229 "psk": "key0", 00:14:27.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:27.229 "hdgst": false, 00:14:27.229 "ddgst": false, 00:14:27.229 "multipath": "multipath" 00:14:27.229 } 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "method": "bdev_nvme_set_hotplug", 00:14:27.229 "params": { 00:14:27.229 "period_us": 100000, 00:14:27.229 "enable": false 00:14:27.229 } 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "method": "bdev_enable_histogram", 00:14:27.229 "params": { 00:14:27.229 "name": "nvme0n1", 00:14:27.229 "enable": true 00:14:27.229 } 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "method": "bdev_wait_for_examine" 00:14:27.229 } 00:14:27.229 ] 00:14:27.229 }, 00:14:27.229 { 
00:14:27.229 "subsystem": "nbd", 00:14:27.229 "config": [] 00:14:27.229 } 00:14:27.229 ] 00:14:27.229 }' 00:14:27.229 [2024-11-20 08:27:14.654128] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:27.229 [2024-11-20 08:27:14.654877] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72486 ] 00:14:27.488 [2024-11-20 08:27:14.799351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.488 [2024-11-20 08:27:14.866676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.488 [2024-11-20 08:27:15.021144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:27.746 [2024-11-20 08:27:15.084259] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:28.312 08:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:28.312 08:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@871 -- # return 0 00:14:28.312 08:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:28.312 08:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:28.570 08:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.570 08:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:28.570 Running I/O for 1 seconds... 
00:14:29.946 4031.00 IOPS, 15.75 MiB/s 00:14:29.946 Latency(us) 00:14:29.946 [2024-11-20T08:27:17.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.946 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:29.946 Verification LBA range: start 0x0 length 0x2000 00:14:29.946 nvme0n1 : 1.02 4095.60 16.00 0.00 0.00 31016.31 4587.52 27525.12 00:14:29.946 [2024-11-20T08:27:17.507Z] =================================================================================================================== 00:14:29.946 [2024-11-20T08:27:17.507Z] Total : 4095.60 16.00 0.00 0.00 31016.31 4587.52 27525.12 00:14:29.946 { 00:14:29.946 "results": [ 00:14:29.946 { 00:14:29.946 "job": "nvme0n1", 00:14:29.946 "core_mask": "0x2", 00:14:29.946 "workload": "verify", 00:14:29.946 "status": "finished", 00:14:29.946 "verify_range": { 00:14:29.946 "start": 0, 00:14:29.946 "length": 8192 00:14:29.946 }, 00:14:29.946 "queue_depth": 128, 00:14:29.946 "io_size": 4096, 00:14:29.946 "runtime": 1.015481, 00:14:29.946 "iops": 4095.596077129951, 00:14:29.946 "mibps": 15.998422176288871, 00:14:29.946 "io_failed": 0, 00:14:29.946 "io_timeout": 0, 00:14:29.946 "avg_latency_us": 31016.30619248508, 00:14:29.946 "min_latency_us": 4587.52, 00:14:29.946 "max_latency_us": 27525.12 00:14:29.946 } 00:14:29.946 ], 00:14:29.946 "core_count": 1 00:14:29.946 } 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@815 -- # type=--id 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # id=0 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@817 -- # '[' --id = --pid ']' 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # shm_files=nvmf_trace.0 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # [[ -z nvmf_trace.0 ]] 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # for n in $shm_files 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@828 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:29.946 nvmf_trace.0 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@830 -- # return 0 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72486 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 72486 ']' 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 72486 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72486 00:14:29.946 killing process with pid 72486 
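A quick consistency check on the numbers just printed, back-of-the-envelope only, using the JSON fields above and Little's law for the latency estimate:

  MiB/s        = IOPS * io_size / 2^20   = 4095.60 * 4096 / 1048576  ≈ 16.00   (matches the table)
  total I/Os   = IOPS * runtime          = 4095.60 * 1.015481        ≈ 4159
  avg latency  ≈ queue_depth / IOPS      = 128 / 4095.60 s           ≈ 31.3 ms  vs. 31.0 ms reported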
00:14:29.946 Received shutdown signal, test time was about 1.000000 seconds 00:14:29.946 00:14:29.946 Latency(us) 00:14:29.946 [2024-11-20T08:27:17.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.946 [2024-11-20T08:27:17.507Z] =================================================================================================================== 00:14:29.946 [2024-11-20T08:27:17.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72486' 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 72486 00:14:29.946 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 72486 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:30.206 rmmod nvme_tcp 00:14:30.206 rmmod nvme_fabrics 00:14:30.206 rmmod nvme_keyring 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72454 ']' 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72454 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' -z 72454 ']' 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # kill -0 72454 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # uname 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72454 00:14:30.206 killing process with pid 72454 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72454' 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # kill 72454 00:14:30.206 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@981 -- # wait 72454 00:14:30.464 
08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:30.464 08:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:30.464 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:30.464 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.oYS173iy53 /tmp/tmp.Oym7EUxDrM /tmp/tmp.jQCv2Cn94U 00:14:30.723 00:14:30.723 real 1m27.365s 00:14:30.723 user 2m18.603s 00:14:30.723 sys 0m29.785s 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1133 -- # xtrace_disable 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.723 
************************************ 00:14:30.723 END TEST nvmf_tls 00:14:30.723 ************************************ 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1114 -- # xtrace_disable 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.723 ************************************ 00:14:30.723 START TEST nvmf_fips 00:14:30.723 ************************************ 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:30.723 * Looking for test storage... 00:14:30.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1638 -- # lcov --version 00:14:30.723 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:14:30.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.983 --rc genhtml_branch_coverage=1 00:14:30.983 --rc genhtml_function_coverage=1 00:14:30.983 --rc genhtml_legend=1 00:14:30.983 --rc geninfo_all_blocks=1 00:14:30.983 --rc geninfo_unexecuted_blocks=1 00:14:30.983 00:14:30.983 ' 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:14:30.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.983 --rc genhtml_branch_coverage=1 00:14:30.983 --rc genhtml_function_coverage=1 00:14:30.983 --rc genhtml_legend=1 00:14:30.983 --rc geninfo_all_blocks=1 00:14:30.983 --rc geninfo_unexecuted_blocks=1 00:14:30.983 00:14:30.983 ' 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:14:30.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.983 --rc genhtml_branch_coverage=1 00:14:30.983 --rc genhtml_function_coverage=1 00:14:30.983 --rc genhtml_legend=1 00:14:30.983 --rc geninfo_all_blocks=1 00:14:30.983 --rc geninfo_unexecuted_blocks=1 00:14:30.983 00:14:30.983 ' 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:14:30.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.983 --rc genhtml_branch_coverage=1 00:14:30.983 --rc genhtml_function_coverage=1 00:14:30.983 --rc genhtml_legend=1 00:14:30.983 --rc geninfo_all_blocks=1 00:14:30.983 --rc geninfo_unexecuted_blocks=1 00:14:30.983 00:14:30.983 ' 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
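What the log is stepping through here is scripts/common.sh's dotted-version comparison: each version string is split on '.', '-' and ':' into an array and compared element by element (first for the lcov 1.15 < 2 check above, then again below for the openssl 3.1.1 >= 3.0.0 check). A simplified sketch of that mechanism; the helper name and return handling are condensed relative to the real cmp_versions/ge/lt wrappers:

  version_ge() {                      # succeeds when dotted version $1 >= $2
      local IFS=.-: ; local -a v1 v2 ; local i
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
      done
      return 0                        # all elements equal
  }
  version_ge 3.1.1 3.0.0 && echo "OpenSSL is new enough for the FIPS checks"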
00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.983 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.984 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! -t 0 ]] 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # local es=0 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@657 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@643 -- # local arg=openssl 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@647 -- # type -t openssl 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@649 -- # type -P openssl 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@649 -- # arg=/usr/bin/openssl 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@649 -- # [[ -x /usr/bin/openssl ]] 00:14:30.984 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@658 -- # openssl md5 /dev/fd/62 00:14:31.243 Error setting digest 00:14:31.243 40F21D02E27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:31.243 40F21D02E27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@658 -- # es=1 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:31.243 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:31.244 Cannot find device "nvmf_init_br" 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:31.244 Cannot find device "nvmf_init_br2" 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:31.244 Cannot find device "nvmf_tgt_br" 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.244 Cannot find device "nvmf_tgt_br2" 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:31.244 Cannot find device "nvmf_init_br" 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:31.244 Cannot find device "nvmf_init_br2" 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:31.244 Cannot find device "nvmf_tgt_br" 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:31.244 Cannot find device "nvmf_tgt_br2" 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:31.244 Cannot find device "nvmf_br" 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:31.244 Cannot find device "nvmf_init_if" 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- nvmf/common.sh@171 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:31.244 Cannot find device "nvmf_init_if2" 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:31.244 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk 
ip link set lo up 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:31.503 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:31.503 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:14:31.503 00:14:31.503 --- 10.0.0.3 ping statistics --- 00:14:31.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.503 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:31.503 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:31.504 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:31.504 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.106 ms 00:14:31.504 00:14:31.504 --- 10.0.0.4 ping statistics --- 00:14:31.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.504 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:31.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:31.504 00:14:31.504 --- 10.0.0.1 ping statistics --- 00:14:31.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.504 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:31.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:31.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:31.504 00:14:31.504 --- 10.0.0.2 ping statistics --- 00:14:31.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.504 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72815 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72815 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # '[' -z 72815 ']' 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:31.504 08:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:31.504 [2024-11-20 08:27:19.033332] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
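Before the nvmf_tgt application whose startup banner begins above is launched, nvmf/common.sh's nvmf_veth_init builds a self-contained test network: the initiator keeps nvmf_init_if/nvmf_init_if2 (10.0.0.1/.2) in the root namespace, the target side gets nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3/.4) inside the nvmf_tgt_ns_spdk namespace, and the bridge-side peer of every veth pair is enslaved to nvmf_br so the two namespaces can reach each other over TCP port 4420. A condensed, hand-runnable sketch of the same commands, showing only the first interface of each pair and omitting the harness's error handling:

    # the target lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk

    # one veth pair per interface; the *_br ends stay behind for the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # initiator address in the root namespace, target address inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # the bridge ties the *_br peers together so 10.0.0.1 can reach 10.0.0.3
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # let NVMe/TCP (port 4420) in, tagged so cleanup can find the rule later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.3   # connectivity check across the bridge, as in the trace

The sub-millisecond ping round trips recorded above are normal for a veth/bridge path; they only prove the topology is wired up before the target application starts.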
00:14:31.504 [2024-11-20 08:27:19.034075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.763 [2024-11-20 08:27:19.188534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.763 [2024-11-20 08:27:19.268292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.763 [2024-11-20 08:27:19.268370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.763 [2024-11-20 08:27:19.268385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.763 [2024-11-20 08:27:19.268396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.763 [2024-11-20 08:27:19.268405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.763 [2024-11-20 08:27:19.268950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.021 [2024-11-20 08:27:19.343015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@871 -- # return 0 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@735 -- # xtrace_disable 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Iq5 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Iq5 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Iq5 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Iq5 00:14:32.585 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.843 [2024-11-20 08:27:20.316953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.843 [2024-11-20 08:27:20.332860] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:32.843 [2024-11-20 08:27:20.333104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:32.843 malloc0 00:14:32.843 08:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:32.843 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72856 00:14:32.843 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:32.843 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72856 /var/tmp/bdevperf.sock 00:14:32.843 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # '[' -z 72856 ']' 00:14:32.843 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.843 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:32.844 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:32.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:32.844 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:32.844 08:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:33.101 [2024-11-20 08:27:20.487722] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:33.101 [2024-11-20 08:27:20.487838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72856 ] 00:14:33.101 [2024-11-20 08:27:20.638548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.359 [2024-11-20 08:27:20.696436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.359 [2024-11-20 08:27:20.753780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:33.925 08:27:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:33.925 08:27:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@871 -- # return 0 00:14:33.926 08:27:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Iq5 00:14:34.183 08:27:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:34.442 [2024-11-20 08:27:21.934774] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:34.700 TLSTESTn1 00:14:34.700 08:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:34.700 Running I/O for 10 seconds... 
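On the initiator side the FIPS/TLS case drives everything through bdevperf's own RPC socket: the interchange PSK written to /tmp/spdk-psk.Iq5 is registered as keyring entry key0, the controller is attached with --psk so the NVMe/TCP connection is wrapped in TLS, and perform_tests launches the 10-second verify workload whose per-second throughput follows below. A minimal replay of the traced initiator-side steps (the target-side subsystem setup done by setup_nvmf_tgt_conf via rpc.py is not expanded here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # TLS interchange key from fips.sh, stored with restrictive permissions
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"

    # start bdevperf idle (-z) with its own RPC socket; the harness waits for the
    # socket (waitforlisten) before issuing RPCs
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

    # register the PSK, then attach the controller over TLS
    "$rpc" -s "$sock" keyring_file_add_key key0 "$key_path"
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # run the configured verify workload against the attached namespace (TLSTESTn1)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests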
00:14:37.006 3719.00 IOPS, 14.53 MiB/s [2024-11-20T08:27:25.501Z] 3796.50 IOPS, 14.83 MiB/s [2024-11-20T08:27:26.436Z] 3855.00 IOPS, 15.06 MiB/s [2024-11-20T08:27:27.370Z] 3840.75 IOPS, 15.00 MiB/s [2024-11-20T08:27:28.305Z] 3809.00 IOPS, 14.88 MiB/s [2024-11-20T08:27:29.241Z] 3823.17 IOPS, 14.93 MiB/s [2024-11-20T08:27:30.176Z] 3841.57 IOPS, 15.01 MiB/s [2024-11-20T08:27:31.553Z] 3861.25 IOPS, 15.08 MiB/s [2024-11-20T08:27:32.491Z] 3861.44 IOPS, 15.08 MiB/s [2024-11-20T08:27:32.491Z] 3862.50 IOPS, 15.09 MiB/s 00:14:44.930 Latency(us) 00:14:44.930 [2024-11-20T08:27:32.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.930 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:44.930 Verification LBA range: start 0x0 length 0x2000 00:14:44.930 TLSTESTn1 : 10.02 3868.87 15.11 0.00 0.00 33033.16 4021.53 35508.60 00:14:44.930 [2024-11-20T08:27:32.491Z] =================================================================================================================== 00:14:44.930 [2024-11-20T08:27:32.491Z] Total : 3868.87 15.11 0.00 0.00 33033.16 4021.53 35508.60 00:14:44.930 { 00:14:44.930 "results": [ 00:14:44.930 { 00:14:44.930 "job": "TLSTESTn1", 00:14:44.930 "core_mask": "0x4", 00:14:44.930 "workload": "verify", 00:14:44.930 "status": "finished", 00:14:44.930 "verify_range": { 00:14:44.930 "start": 0, 00:14:44.930 "length": 8192 00:14:44.930 }, 00:14:44.930 "queue_depth": 128, 00:14:44.930 "io_size": 4096, 00:14:44.930 "runtime": 10.0161, 00:14:44.930 "iops": 3868.871117500824, 00:14:44.930 "mibps": 15.112777802737593, 00:14:44.930 "io_failed": 0, 00:14:44.930 "io_timeout": 0, 00:14:44.930 "avg_latency_us": 33033.16372870144, 00:14:44.930 "min_latency_us": 4021.5272727272727, 00:14:44.930 "max_latency_us": 35508.59636363637 00:14:44.930 } 00:14:44.930 ], 00:14:44.930 "core_count": 1 00:14:44.930 } 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@815 -- # type=--id 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # id=0 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@817 -- # '[' --id = --pid ']' 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # shm_files=nvmf_trace.0 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # [[ -z nvmf_trace.0 ]] 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # for n in $shm_files 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@828 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:44.930 nvmf_trace.0 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@830 -- # return 0 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72856 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' -z 72856 ']' 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@961 -- # kill -0 
72856 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # uname 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72856 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:14:44.930 killing process with pid 72856 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72856' 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # kill 72856 00:14:44.930 Received shutdown signal, test time was about 10.000000 seconds 00:14:44.930 00:14:44.930 Latency(us) 00:14:44.930 [2024-11-20T08:27:32.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.930 [2024-11-20T08:27:32.491Z] =================================================================================================================== 00:14:44.930 [2024-11-20T08:27:32.491Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.930 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@981 -- # wait 72856 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:45.189 rmmod nvme_tcp 00:14:45.189 rmmod nvme_fabrics 00:14:45.189 rmmod nvme_keyring 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72815 ']' 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72815 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' -z 72815 ']' 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@961 -- # kill -0 72815 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # uname 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 72815 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:14:45.189 killing process with pid 72815 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@975 -- # echo 'killing process with pid 72815' 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # kill 72815 00:14:45.189 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@981 -- # wait 72815 00:14:45.448 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.448 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.448 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.448 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:45.448 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:45.448 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:45.448 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.448 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.448 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:45.448 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:45.449 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:45.449 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:45.449 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.449 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:45.449 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:45.449 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:45.449 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:45.449 08:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:45.708 08:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Iq5 00:14:45.708 00:14:45.708 real 0m14.969s 00:14:45.708 user 0m20.062s 00:14:45.708 sys 0m6.300s 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1133 -- # xtrace_disable 00:14:45.708 ************************************ 00:14:45.708 END TEST nvmf_fips 00:14:45.708 ************************************ 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1114 -- # xtrace_disable 00:14:45.708 08:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:45.708 ************************************ 00:14:45.708 START TEST nvmf_control_msg_list 00:14:45.708 ************************************ 00:14:45.709 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:45.968 * Looking for test storage... 00:14:45.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1638 -- # lcov --version 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:14:45.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.969 --rc genhtml_branch_coverage=1 00:14:45.969 --rc genhtml_function_coverage=1 00:14:45.969 --rc genhtml_legend=1 00:14:45.969 --rc geninfo_all_blocks=1 00:14:45.969 --rc geninfo_unexecuted_blocks=1 00:14:45.969 00:14:45.969 ' 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:14:45.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.969 --rc genhtml_branch_coverage=1 00:14:45.969 --rc genhtml_function_coverage=1 00:14:45.969 --rc genhtml_legend=1 00:14:45.969 --rc geninfo_all_blocks=1 00:14:45.969 --rc geninfo_unexecuted_blocks=1 00:14:45.969 00:14:45.969 ' 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:14:45.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.969 --rc genhtml_branch_coverage=1 00:14:45.969 --rc genhtml_function_coverage=1 00:14:45.969 --rc genhtml_legend=1 00:14:45.969 --rc geninfo_all_blocks=1 00:14:45.969 --rc geninfo_unexecuted_blocks=1 00:14:45.969 00:14:45.969 ' 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:14:45.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.969 --rc genhtml_branch_coverage=1 00:14:45.969 --rc genhtml_function_coverage=1 00:14:45.969 --rc genhtml_legend=1 00:14:45.969 --rc geninfo_all_blocks=1 00:14:45.969 --rc 
geninfo_unexecuted_blocks=1 00:14:45.969 00:14:45.969 ' 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:45.969 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:45.970 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:45.970 08:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:45.970 Cannot find device "nvmf_init_br" 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:45.970 Cannot find device "nvmf_init_br2" 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:45.970 Cannot find device "nvmf_tgt_br" 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:45.970 Cannot find device "nvmf_tgt_br2" 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:45.970 Cannot find device "nvmf_init_br" 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:45.970 Cannot find device "nvmf_init_br2" 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:45.970 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:45.970 Cannot find device "nvmf_tgt_br" 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:46.230 Cannot find device "nvmf_tgt_br2" 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:46.230 Cannot find device "nvmf_br" 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:46.230 Cannot find device "nvmf_init_if" 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:46.230 Cannot find device "nvmf_init_if2" 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 
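The "Cannot find device" and "Cannot open network namespace" messages above are expected: before building anything, nvmf_veth_init first repeats the teardown that nvmf_veth_fini ran at the end of the previous test, and every cleanup command is allowed to fail so the setup stays idempotent on a clean host. Firewall rules are removed by pattern rather than by handle, which is why each one was inserted with an 'SPDK_NVMF:' comment. A rough sketch of that cleanup path (the namespace removal is handled by remove_spdk_ns in the harness; a plain 'ip netns delete' stands in for it here):

    # tear down whatever a previous run may have left behind; each step may fail
    ip link set nvmf_init_br nomaster || true
    ip link set nvmf_tgt_br nomaster || true
    ip link set nvmf_init_br down || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns delete nvmf_tgt_ns_spdk || true   # approximation of remove_spdk_ns

    # drop every firewall rule tagged with the SPDK_NVMF comment in one pass
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The rebuild then continues below exactly as it did for the fips run earlier in the log.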
00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.230 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.489 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.489 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:46.489 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:46.489 08:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:46.489 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.489 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:46.490 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:46.490 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:14:46.490 00:14:46.490 --- 10.0.0.3 ping statistics --- 00:14:46.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.490 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:46.490 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:46.490 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:14:46.490 00:14:46.490 --- 10.0.0.4 ping statistics --- 00:14:46.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.490 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:14:46.490 00:14:46.490 --- 10.0.0.1 ping statistics --- 00:14:46.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.490 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:46.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:46.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:14:46.490 00:14:46.490 --- 10.0.0.2 ping statistics --- 00:14:46.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.490 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73253 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73253 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # '[' -z 73253 ']' 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:46.490 08:27:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:46.490 [2024-11-20 08:27:33.916101] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
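As in the fips case, the target for this test is started inside the namespace with -e 0xFFFF, which enables every tracepoint group and backs the trace with a shared-memory file named after the instance id (-i 0 gives /dev/shm/nvmf_trace.0). That file is what the fips cleanup above tarred up, and it is what the spdk_trace hint in the startup notices refers to. Roughly:

    # launch the target inside the namespace, all tracepoint groups on, shm id 0
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &

    # snapshot the trace while the target is running, as the notice suggests
    spdk_trace -s nvmf -i 0

    # or keep the raw shared-memory trace for offline analysis, as cleanup does
    tar -C /dev/shm/ -cvzf nvmf_trace.0_shm.tar.gz nvmf_trace.0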
00:14:46.490 [2024-11-20 08:27:33.916192] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.749 [2024-11-20 08:27:34.067331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.749 [2024-11-20 08:27:34.129671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.749 [2024-11-20 08:27:34.129724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.749 [2024-11-20 08:27:34.129738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.749 [2024-11-20 08:27:34.129748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.749 [2024-11-20 08:27:34.129757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.749 [2024-11-20 08:27:34.130260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.749 [2024-11-20 08:27:34.189421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.749 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:46.749 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@871 -- # return 0 00:14:46.750 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:46.750 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@735 -- # xtrace_disable 00:14:46.750 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:46.750 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.750 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:46.750 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:46.750 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:46.750 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:46.750 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:46.750 [2024-11-20 08:27:34.306538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:47.007 Malloc0 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:47.007 [2024-11-20 08:27:34.346212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73277 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73278 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73279 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:47.007 08:27:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73277 00:14:47.007 [2024-11-20 08:27:34.535004] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
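The trace above condenses to a short RPC sequence: create a TCP transport restricted to a 768-byte in-capsule data size and a single control message buffer (--control-msg-num 1), export a malloc-backed namespace under nqn.2024-07.io.spdk:cnode0, listen on 10.0.0.3:4420, and then launch three single-queue-depth perf initiators on separate cores so they contend for that one control message. A hedged sketch of the same steps (rpc_cmd is the autotest wrapper for the target's RPC socket; all values are the ones used in this run):

# Transport limited to a single control message buffer
rpc_cmd nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
rpc_cmd bdev_malloc_create -b Malloc0 32 512
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# Three single-queue-depth initiators on cores 1-3 contend for the control message
for core in 0x2 0x4 0x8; do
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c "$core" -q 1 -o 4096 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
done
wait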
00:14:47.007 [2024-11-20 08:27:34.535221] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:47.007 [2024-11-20 08:27:34.555121] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:48.384 Initializing NVMe Controllers 00:14:48.384 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:48.384 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:48.384 Initialization complete. Launching workers. 00:14:48.384 ======================================================== 00:14:48.384 Latency(us) 00:14:48.384 Device Information : IOPS MiB/s Average min max 00:14:48.384 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3433.00 13.41 290.92 201.76 1066.33 00:14:48.384 ======================================================== 00:14:48.384 Total : 3433.00 13.41 290.92 201.76 1066.33 00:14:48.384 00:14:48.384 Initializing NVMe Controllers 00:14:48.384 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:48.384 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:48.384 Initialization complete. Launching workers. 00:14:48.384 ======================================================== 00:14:48.384 Latency(us) 00:14:48.384 Device Information : IOPS MiB/s Average min max 00:14:48.384 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3435.99 13.42 290.75 203.13 996.06 00:14:48.384 ======================================================== 00:14:48.384 Total : 3435.99 13.42 290.75 203.13 996.06 00:14:48.384 00:14:48.384 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73278 00:14:48.384 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73279 00:14:48.384 Initializing NVMe Controllers 00:14:48.384 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:48.384 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:48.384 Initialization complete. Launching workers. 
00:14:48.384 ======================================================== 00:14:48.384 Latency(us) 00:14:48.384 Device Information : IOPS MiB/s Average min max 00:14:48.384 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3499.00 13.67 285.43 112.06 1166.32 00:14:48.384 ======================================================== 00:14:48.384 Total : 3499.00 13.67 285.43 112.06 1166.32 00:14:48.384 00:14:48.384 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:48.384 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:48.384 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:48.384 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:48.384 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:48.384 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:48.384 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:48.384 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:48.384 rmmod nvme_tcp 00:14:48.384 rmmod nvme_fabrics 00:14:48.384 rmmod nvme_keyring 00:14:48.384 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73253 ']' 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73253 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' -z 73253 ']' 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@961 -- # kill -0 73253 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # uname 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 73253 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:14:48.385 killing process with pid 73253 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@975 -- # echo 'killing process with pid 73253' 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # kill 73253 00:14:48.385 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@981 -- # wait 73253 00:14:48.644 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:48.644 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- 
# [[ tcp == \t\c\p ]] 00:14:48.644 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:48.644 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:48.644 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:48.644 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:48.644 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:48.644 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:48.644 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:48.644 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:48.644 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:48.644 08:27:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.644 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.903 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:48.903 00:14:48.903 real 0m3.023s 00:14:48.903 user 0m4.869s 00:14:48.903 sys 0m1.361s 00:14:48.903 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1133 -- # xtrace_disable 00:14:48.903 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 
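The cleanup traced above mirrors the setup in reverse: the host-side NVMe modules are unloaded, the target process is killed, the SPDK-tagged firewall rules are stripped, and the veth/bridge topology plus the target namespace are removed. A simplified sketch of that teardown, using the interface and namespace names from this run (killprocess and remove_spdk_ns are the autotest helpers; the per-interface nomaster/down steps shown above are omitted for brevity):

# Unload host-side NVMe-oF modules and stop the target
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
killprocess "$nvmfpid"

# Keep only the iptables rules that were not tagged SPDK_NVMF during setup
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Dismantle the bridge, the initiator-side veths, and the target namespace
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
remove_spdk_ns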
00:14:48.903 ************************************ 00:14:48.903 END TEST nvmf_control_msg_list 00:14:48.903 ************************************ 00:14:48.903 08:27:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:48.903 08:27:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:14:48.903 08:27:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1114 -- # xtrace_disable 00:14:48.903 08:27:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.903 ************************************ 00:14:48.903 START TEST nvmf_wait_for_buf 00:14:48.903 ************************************ 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:48.904 * Looking for test storage... 00:14:48.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1638 -- # lcov --version 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.904 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:14:49.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.164 --rc genhtml_branch_coverage=1 00:14:49.164 --rc genhtml_function_coverage=1 00:14:49.164 --rc genhtml_legend=1 00:14:49.164 --rc geninfo_all_blocks=1 00:14:49.164 --rc geninfo_unexecuted_blocks=1 00:14:49.164 00:14:49.164 ' 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:14:49.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.164 --rc genhtml_branch_coverage=1 00:14:49.164 --rc genhtml_function_coverage=1 00:14:49.164 --rc genhtml_legend=1 00:14:49.164 --rc geninfo_all_blocks=1 00:14:49.164 --rc geninfo_unexecuted_blocks=1 00:14:49.164 00:14:49.164 ' 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:14:49.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.164 --rc genhtml_branch_coverage=1 00:14:49.164 --rc genhtml_function_coverage=1 00:14:49.164 --rc genhtml_legend=1 00:14:49.164 --rc geninfo_all_blocks=1 00:14:49.164 --rc geninfo_unexecuted_blocks=1 00:14:49.164 00:14:49.164 ' 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:14:49.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.164 --rc genhtml_branch_coverage=1 00:14:49.164 --rc genhtml_function_coverage=1 00:14:49.164 --rc genhtml_legend=1 00:14:49.164 --rc geninfo_all_blocks=1 00:14:49.164 --rc geninfo_unexecuted_blocks=1 00:14:49.164 00:14:49.164 ' 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:49.164 08:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.164 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:49.165 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:49.165 08:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:49.165 Cannot find device "nvmf_init_br" 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:49.165 Cannot find device "nvmf_init_br2" 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:49.165 Cannot find device "nvmf_tgt_br" 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:49.165 Cannot find device "nvmf_tgt_br2" 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:49.165 Cannot find device "nvmf_init_br" 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:49.165 Cannot find device "nvmf_init_br2" 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:49.165 Cannot find device "nvmf_tgt_br" 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:49.165 Cannot find device "nvmf_tgt_br2" 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:49.165 Cannot find device "nvmf_br" 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:49.165 Cannot find device "nvmf_init_if" 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:49.165 Cannot find device "nvmf_init_if2" 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:49.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:49.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:49.165 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:49.165 08:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:49.424 08:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:49.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:49.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:14:49.424 00:14:49.424 --- 10.0.0.3 ping statistics --- 00:14:49.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.424 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:49.424 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:49.424 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.115 ms 00:14:49.424 00:14:49.424 --- 10.0.0.4 ping statistics --- 00:14:49.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.424 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:14:49.424 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:49.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:14:49.424 00:14:49.424 --- 10.0.0.1 ping statistics --- 00:14:49.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.424 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:49.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:14:49.425 00:14:49.425 --- 10.0.0.2 ping statistics --- 00:14:49.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.425 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # 
set +x 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73521 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73521 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # '[' -z 73521 ']' 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:49.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:49.425 08:27:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.425 [2024-11-20 08:27:36.963081] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:49.425 [2024-11-20 08:27:36.963185] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.683 [2024-11-20 08:27:37.110353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.683 [2024-11-20 08:27:37.158537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.683 [2024-11-20 08:27:37.158604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.683 [2024-11-20 08:27:37.158631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.683 [2024-11-20 08:27:37.158639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.683 [2024-11-20 08:27:37.158646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
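Unlike the previous test, the wait_for_buf target is started with --wait-for-rpc so the iobuf pool can be shrunk to 154 small buffers before the framework initializes; the 128 KiB random-read perf run traced below then has to make progress through iobuf retries, which the script verifies afterwards with iobuf_get_stats. A hedged sketch of that flow with the option values used in this run (the failure message is illustrative):

# Target was launched with --wait-for-rpc, so tune the pools before init
rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
rpc_cmd framework_start_init

# Undersized transport buffer counts push allocations onto the shared iobuf pool
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
# ... subsystem, Malloc0 namespace, and 10.0.0.3:4420 listener as in the previous test ...
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

# The run only counts as a pass if the small-buffer pool actually had to retry
retry_count=$(rpc_cmd iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ "$retry_count" -eq 0 ]]; then
  echo "expected iobuf small-pool retries, got none" >&2
  exit 1
fi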
00:14:49.683 [2024-11-20 08:27:37.159068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.683 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:49.683 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@871 -- # return 0 00:14:49.683 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:49.683 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@735 -- # xtrace_disable 00:14:49.683 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.945 [2024-11-20 08:27:37.322652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.945 Malloc0 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@566 -- # xtrace_disable 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.945 [2024-11-20 08:27:37.390234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:49.945 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.946 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:49.946 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:49.946 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:49.946 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.946 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:49.946 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:49.946 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:49.946 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.946 [2024-11-20 08:27:37.418321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:49.946 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:49.946 08:27:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:50.204 [2024-11-20 08:27:37.625011] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:51.581 Initializing NVMe Controllers 00:14:51.581 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:51.581 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:51.581 Initialization complete. Launching workers. 
00:14:51.581 ======================================================== 00:14:51.581 Latency(us) 00:14:51.581 Device Information : IOPS MiB/s Average min max 00:14:51.581 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 500.96 62.62 8000.03 7848.24 8151.11 00:14:51.581 ======================================================== 00:14:51.581 Total : 500.96 62.62 8000.03 7848.24 8151.11 00:14:51.581 00:14:51.581 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:51.581 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:51.581 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:51.581 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.581 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:51.581 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:14:51.581 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:14:51.581 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:51.581 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:51.581 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:51.581 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:51.581 rmmod nvme_tcp 00:14:51.581 rmmod nvme_fabrics 00:14:51.581 rmmod nvme_keyring 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73521 ']' 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73521 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' -z 73521 ']' 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@961 -- # kill -0 73521 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # uname 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 73521 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@963 -- # 
process_name=reactor_0 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@975 -- # echo 'killing process with pid 73521' 00:14:51.581 killing process with pid 73521 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # kill 73521 00:14:51.581 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@981 -- # wait 73521 00:14:51.845 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:51.845 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:51.845 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:51.845 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:51.845 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:51.845 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:51.845 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:51.845 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:51.846 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:51.846 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:51.846 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:51.846 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:51.846 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.846 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:51.846 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:51.846 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:51.846 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:51.846 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.107 08:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:52.107 00:14:52.107 real 0m3.270s 00:14:52.107 user 0m2.618s 00:14:52.107 sys 0m0.801s 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1133 -- # xtrace_disable 00:14:52.107 ************************************ 00:14:52.107 END TEST nvmf_wait_for_buf 00:14:52.107 ************************************ 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1114 -- # xtrace_disable 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.107 ************************************ 00:14:52.107 START TEST nvmf_nsid 00:14:52.107 ************************************ 00:14:52.107 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:52.367 * Looking for test storage... 
00:14:52.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1638 -- # lcov --version 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:14:52.367 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:14:52.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.368 --rc genhtml_branch_coverage=1 00:14:52.368 --rc genhtml_function_coverage=1 00:14:52.368 --rc genhtml_legend=1 00:14:52.368 --rc geninfo_all_blocks=1 00:14:52.368 --rc geninfo_unexecuted_blocks=1 00:14:52.368 00:14:52.368 ' 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:14:52.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.368 --rc genhtml_branch_coverage=1 00:14:52.368 --rc genhtml_function_coverage=1 00:14:52.368 --rc genhtml_legend=1 00:14:52.368 --rc geninfo_all_blocks=1 00:14:52.368 --rc geninfo_unexecuted_blocks=1 00:14:52.368 00:14:52.368 ' 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:14:52.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.368 --rc genhtml_branch_coverage=1 00:14:52.368 --rc genhtml_function_coverage=1 00:14:52.368 --rc genhtml_legend=1 00:14:52.368 --rc geninfo_all_blocks=1 00:14:52.368 --rc geninfo_unexecuted_blocks=1 00:14:52.368 00:14:52.368 ' 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:14:52.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.368 --rc genhtml_branch_coverage=1 00:14:52.368 --rc genhtml_function_coverage=1 00:14:52.368 --rc genhtml_legend=1 00:14:52.368 --rc geninfo_all_blocks=1 00:14:52.368 --rc geninfo_unexecuted_blocks=1 00:14:52.368 00:14:52.368 ' 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.368 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:52.368 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:52.369 Cannot find device "nvmf_init_br" 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:52.369 Cannot find device "nvmf_init_br2" 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:52.369 Cannot find device "nvmf_tgt_br" 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.369 Cannot find device "nvmf_tgt_br2" 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:52.369 Cannot find device "nvmf_init_br" 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:52.369 Cannot find device "nvmf_init_br2" 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:52.369 Cannot find device "nvmf_tgt_br" 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:14:52.369 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:52.628 Cannot find device "nvmf_tgt_br2" 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:52.628 Cannot find device "nvmf_br" 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:52.628 Cannot find device "nvmf_init_if" 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/common.sh@171 -- # true 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:52.628 Cannot find device "nvmf_init_if2" 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.628 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk 
ip link set lo up 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.628 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:52.897 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:52.897 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:52.897 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:52.897 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:52.897 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:52.897 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:52.897 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:52.897 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:52.897 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:52.897 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:52.897 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:14:52.897 00:14:52.897 --- 10.0.0.3 ping statistics --- 00:14:52.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.897 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:52.897 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:52.897 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:52.897 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:14:52.897 00:14:52.898 --- 10.0.0.4 ping statistics --- 00:14:52.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.898 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:52.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:52.898 00:14:52.898 --- 10.0.0.1 ping statistics --- 00:14:52.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.898 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:52.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:52.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:14:52.898 00:14:52.898 --- 10.0.0.2 ping statistics --- 00:14:52.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.898 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73786 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73786 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # '[' -z 73786 ']' 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:52.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:52.898 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:52.898 [2024-11-20 08:27:40.351431] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:14:52.898 [2024-11-20 08:27:40.351567] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.162 [2024-11-20 08:27:40.499297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.162 [2024-11-20 08:27:40.557436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.162 [2024-11-20 08:27:40.557499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.162 [2024-11-20 08:27:40.557510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.162 [2024-11-20 08:27:40.557517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.162 [2024-11-20 08:27:40.557524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.162 [2024-11-20 08:27:40.558007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.162 [2024-11-20 08:27:40.613304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.162 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:53.162 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@871 -- # return 0 00:14:53.162 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:53.162 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@735 -- # xtrace_disable 00:14:53.162 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73805 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=c64345fa-c230-40dc-8bfe-6bd20d948563 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=0b8e39be-b4ea-4763-86f9-49bf1d529bf0 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d1b6634d-38f4-48be-a332-2ffb4b84954e 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:53.421 null0 00:14:53.421 null1 00:14:53.421 null2 00:14:53.421 [2024-11-20 08:27:40.788344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.421 [2024-11-20 08:27:40.799692] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:53.421 [2024-11-20 08:27:40.799786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73805 ] 00:14:53.421 [2024-11-20 08:27:40.812507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73805 /var/tmp/tgt2.sock 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # '[' -z 73805 ']' 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/tgt2.sock 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:53.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:53.421 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:53.421 [2024-11-20 08:27:40.951922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.680 [2024-11-20 08:27:41.039253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.680 [2024-11-20 08:27:41.155360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.939 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:53.939 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@871 -- # return 0 00:14:53.939 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:14:54.506 [2024-11-20 08:27:41.785954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.506 [2024-11-20 08:27:41.802043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:14:54.506 nvme0n1 nvme0n2 00:14:54.506 nvme1n1 00:14:54.506 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:14:54.506 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:14:54.506 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:54.506 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:14:54.507 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:14:54.507 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:14:54.507 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:14:54.507 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:14:54.507 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:14:54.507 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:14:54.507 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # local i=0 00:14:54.507 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # lsblk -l -o NAME 00:14:54.507 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # grep -q -w nvme0n1 00:14:54.507 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # '[' 0 -lt 15 ']' 00:14:54.507 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1245 -- # i=1 00:14:54.507 08:27:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # sleep 1 00:14:55.488 08:27:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # lsblk -l -o NAME 00:14:55.488 08:27:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # grep -q -w nvme0n1 00:14:55.488 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1249 -- # lsblk -l -o NAME 00:14:55.488 08:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1249 -- # grep -q -w nvme0n1 00:14:55.488 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1253 -- # return 0 00:14:55.488 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid c64345fa-c230-40dc-8bfe-6bd20d948563 00:14:55.488 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:55.489 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:14:55.489 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:14:55.489 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c64345fac23040dc8bfe6bd20d948563 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C64345FAC23040DC8BFE6BD20D948563 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ C64345FAC23040DC8BFE6BD20D948563 == \C\6\4\3\4\5\F\A\C\2\3\0\4\0\D\C\8\B\F\E\6\B\D\2\0\D\9\4\8\5\6\3 ]] 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # local i=0 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # lsblk -l -o NAME 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # grep -q -w nvme0n2 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1249 -- # lsblk -l -o NAME 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1249 -- # grep -q -w nvme0n2 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1253 -- # return 0 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 0b8e39be-b4ea-4763-86f9-49bf1d529bf0 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0b8e39beb4ea476386f949bf1d529bf0 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0B8E39BEB4EA476386F949BF1D529BF0 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 0B8E39BEB4EA476386F949BF1D529BF0 == \0\B\8\E\3\9\B\E\B\4\E\A\4\7\6\3\8\6\F\9\4\9\B\F\1\D\5\2\9\B\F\0 ]] 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # local i=0 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # lsblk -l -o NAME 00:14:55.748 08:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # grep -q -w nvme0n3 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1249 -- # lsblk -l -o NAME 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1249 -- # grep -q -w nvme0n3 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1253 -- # return 0 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d1b6634d-38f4-48be-a332-2ffb4b84954e 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d1b6634d38f448bea3322ffb4b84954e 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D1B6634D38F448BEA3322FFB4B84954E 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D1B6634D38F448BEA3322FFB4B84954E == \D\1\B\6\6\3\4\D\3\8\F\4\4\8\B\E\A\3\3\2\2\F\F\B\4\B\8\4\9\5\4\E ]] 00:14:55.748 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73805 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' -z 73805 ']' 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@961 -- # kill -0 73805 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # uname 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 73805 00:14:56.010 killing process with pid 73805 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@975 -- # echo 'killing process with pid 73805' 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # kill 73805 00:14:56.010 08:27:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@981 -- # wait 73805 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:56.578 rmmod nvme_tcp 00:14:56.578 rmmod nvme_fabrics 00:14:56.578 rmmod nvme_keyring 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73786 ']' 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73786 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' -z 73786 ']' 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@961 -- # kill -0 73786 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # uname 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:56.578 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 73786 00:14:56.837 killing process with pid 73786 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@975 -- # echo 'killing process with pid 73786' 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # kill 73786 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@981 -- # wait 73786 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:56.837 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:14:57.096 00:14:57.096 real 0m5.054s 00:14:57.096 user 0m7.262s 00:14:57.096 sys 0m1.855s 00:14:57.096 ************************************ 00:14:57.096 END TEST nvmf_nsid 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1133 -- # xtrace_disable 00:14:57.096 08:27:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:57.096 ************************************ 00:14:57.356 08:27:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:57.356 00:14:57.356 real 5m13.375s 00:14:57.356 user 10m51.984s 00:14:57.356 sys 1m13.076s 00:14:57.356 08:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1133 -- # xtrace_disable 00:14:57.356 08:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:57.356 ************************************ 00:14:57.356 END TEST nvmf_target_extra 00:14:57.356 ************************************ 00:14:57.356 08:27:44 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:57.356 08:27:44 nvmf_tcp -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:14:57.356 08:27:44 nvmf_tcp -- common/autotest_common.sh@1114 -- # xtrace_disable 00:14:57.356 08:27:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:57.356 ************************************ 00:14:57.356 START TEST nvmf_host 00:14:57.356 ************************************ 00:14:57.356 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:57.356 * Looking for test storage... 
00:14:57.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:57.356 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:14:57.356 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:14:57.356 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1638 -- # lcov --version 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:14:57.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.616 --rc genhtml_branch_coverage=1 00:14:57.616 --rc genhtml_function_coverage=1 00:14:57.616 --rc genhtml_legend=1 00:14:57.616 --rc geninfo_all_blocks=1 00:14:57.616 --rc geninfo_unexecuted_blocks=1 00:14:57.616 00:14:57.616 ' 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:14:57.616 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:57.616 --rc genhtml_branch_coverage=1 00:14:57.616 --rc genhtml_function_coverage=1 00:14:57.616 --rc genhtml_legend=1 00:14:57.616 --rc geninfo_all_blocks=1 00:14:57.616 --rc geninfo_unexecuted_blocks=1 00:14:57.616 00:14:57.616 ' 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:14:57.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.616 --rc genhtml_branch_coverage=1 00:14:57.616 --rc genhtml_function_coverage=1 00:14:57.616 --rc genhtml_legend=1 00:14:57.616 --rc geninfo_all_blocks=1 00:14:57.616 --rc geninfo_unexecuted_blocks=1 00:14:57.616 00:14:57.616 ' 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:14:57.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.616 --rc genhtml_branch_coverage=1 00:14:57.616 --rc genhtml_function_coverage=1 00:14:57.616 --rc genhtml_legend=1 00:14:57.616 --rc geninfo_all_blocks=1 00:14:57.616 --rc geninfo_unexecuted_blocks=1 00:14:57.616 00:14:57.616 ' 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.616 08:27:44 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:57.617 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1114 -- # xtrace_disable 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:57.617 ************************************ 00:14:57.617 START TEST nvmf_identify 00:14:57.617 ************************************ 00:14:57.617 08:27:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:57.617 * Looking for test storage... 00:14:57.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:57.617 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:14:57.617 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1638 -- # lcov --version 00:14:57.617 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:14:57.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.877 --rc genhtml_branch_coverage=1 00:14:57.877 --rc genhtml_function_coverage=1 00:14:57.877 --rc genhtml_legend=1 00:14:57.877 --rc geninfo_all_blocks=1 00:14:57.877 --rc geninfo_unexecuted_blocks=1 00:14:57.877 00:14:57.877 ' 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:14:57.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.877 --rc genhtml_branch_coverage=1 00:14:57.877 --rc genhtml_function_coverage=1 00:14:57.877 --rc genhtml_legend=1 00:14:57.877 --rc geninfo_all_blocks=1 00:14:57.877 --rc geninfo_unexecuted_blocks=1 00:14:57.877 00:14:57.877 ' 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:14:57.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.877 --rc genhtml_branch_coverage=1 00:14:57.877 --rc genhtml_function_coverage=1 00:14:57.877 --rc genhtml_legend=1 00:14:57.877 --rc geninfo_all_blocks=1 00:14:57.877 --rc geninfo_unexecuted_blocks=1 00:14:57.877 00:14:57.877 ' 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:14:57.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.877 --rc genhtml_branch_coverage=1 00:14:57.877 --rc genhtml_function_coverage=1 00:14:57.877 --rc genhtml_legend=1 00:14:57.877 --rc geninfo_all_blocks=1 00:14:57.877 --rc geninfo_unexecuted_blocks=1 00:14:57.877 00:14:57.877 ' 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.877 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:14:57.878 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:57.878 Cannot find device "nvmf_init_br" 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:57.878 Cannot find device "nvmf_init_br2" 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:57.878 Cannot find device "nvmf_tgt_br" 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:57.878 Cannot find device "nvmf_tgt_br2" 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:57.878 Cannot find device "nvmf_init_br" 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:57.878 Cannot find device "nvmf_init_br2" 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:57.878 Cannot find device "nvmf_tgt_br" 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:57.878 Cannot find device "nvmf_tgt_br2" 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:57.878 Cannot find device "nvmf_br" 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:57.878 Cannot find device "nvmf_init_if" 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:57.878 Cannot find device "nvmf_init_if2" 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:57.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:57.878 08:27:45 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:57.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:57.878 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:58.137 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master 
nvmf_br 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:58.138 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:58.138 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:14:58.138 00:14:58.138 --- 10.0.0.3 ping statistics --- 00:14:58.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.138 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:58.138 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:58.138 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:14:58.138 00:14:58.138 --- 10.0.0.4 ping statistics --- 00:14:58.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.138 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:58.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:58.138 00:14:58.138 --- 10.0.0.1 ping statistics --- 00:14:58.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.138 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:58.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:58.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:14:58.138 00:14:58.138 --- 10.0.0.2 ping statistics --- 00:14:58.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.138 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:58.138 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74177 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74177 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # '[' -z 74177 ']' 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@843 -- # local max_retries=100 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@847 -- # xtrace_disable 00:14:58.397 08:27:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.397 [2024-11-20 08:27:45.791940] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:14:58.398 [2024-11-20 08:27:45.792052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.398 [2024-11-20 08:27:45.947463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.656 [2024-11-20 08:27:46.012062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.656 [2024-11-20 08:27:46.012117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.657 [2024-11-20 08:27:46.012131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.657 [2024-11-20 08:27:46.012151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.657 [2024-11-20 08:27:46.012160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.657 [2024-11-20 08:27:46.013498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.657 [2024-11-20 08:27:46.013556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.657 [2024-11-20 08:27:46.013700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.657 [2024-11-20 08:27:46.013711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.657 [2024-11-20 08:27:46.075452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:58.657 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:14:58.657 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@871 -- # return 0 00:14:58.657 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.657 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:58.657 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.657 [2024-11-20 08:27:46.153608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.657 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:58.657 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:58.657 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@735 -- # xtrace_disable 00:14:58.657 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.657 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:58.657 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:58.657 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.915 Malloc0 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.915 
08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.915 [2024-11-20 08:27:46.269040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:58.915 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:58.915 [ 00:14:58.915 { 00:14:58.915 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:58.915 "subtype": "Discovery", 00:14:58.915 "listen_addresses": [ 00:14:58.915 { 00:14:58.915 "trtype": "TCP", 00:14:58.915 "adrfam": "IPv4", 00:14:58.915 "traddr": "10.0.0.3", 00:14:58.915 "trsvcid": "4420" 00:14:58.915 } 00:14:58.915 ], 00:14:58.915 "allow_any_host": true, 00:14:58.915 "hosts": [] 00:14:58.915 }, 00:14:58.915 { 00:14:58.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.915 "subtype": "NVMe", 00:14:58.916 "listen_addresses": [ 00:14:58.916 { 00:14:58.916 "trtype": "TCP", 00:14:58.916 "adrfam": "IPv4", 00:14:58.916 "traddr": "10.0.0.3", 00:14:58.916 "trsvcid": "4420" 00:14:58.916 } 00:14:58.916 ], 00:14:58.916 "allow_any_host": true, 00:14:58.916 "hosts": [], 00:14:58.916 "serial_number": "SPDK00000000000001", 00:14:58.916 "model_number": "SPDK bdev Controller", 00:14:58.916 "max_namespaces": 32, 00:14:58.916 "min_cntlid": 1, 00:14:58.916 "max_cntlid": 65519, 00:14:58.916 "namespaces": [ 00:14:58.916 { 00:14:58.916 "nsid": 1, 00:14:58.916 "bdev_name": "Malloc0", 00:14:58.916 "name": "Malloc0", 00:14:58.916 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:58.916 "eui64": "ABCDEF0123456789", 00:14:58.916 "uuid": "c09fd871-ff3d-4e6e-8f4e-befaa1c3d973" 00:14:58.916 } 00:14:58.916 ] 00:14:58.916 } 00:14:58.916 ] 00:14:58.916 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:58.916 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 
-- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:58.916 [2024-11-20 08:27:46.319354] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:14:58.916 [2024-11-20 08:27:46.319412] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74209 ] 00:14:59.177 [2024-11-20 08:27:46.481056] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:14:59.177 [2024-11-20 08:27:46.481166] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:59.177 [2024-11-20 08:27:46.481174] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:59.177 [2024-11-20 08:27:46.481191] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:59.177 [2024-11-20 08:27:46.481205] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:59.177 [2024-11-20 08:27:46.481621] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:14:59.177 [2024-11-20 08:27:46.481720] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc30750 0 00:14:59.177 [2024-11-20 08:27:46.496940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:59.177 [2024-11-20 08:27:46.497003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:59.177 [2024-11-20 08:27:46.497010] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:59.177 [2024-11-20 08:27:46.497014] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:59.177 [2024-11-20 08:27:46.497072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.497079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.497083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc30750) 00:14:59.177 [2024-11-20 08:27:46.497110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:59.177 [2024-11-20 08:27:46.497181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94740, cid 0, qid 0 00:14:59.177 [2024-11-20 08:27:46.504902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.177 [2024-11-20 08:27:46.504932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.177 [2024-11-20 08:27:46.504955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.504961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94740) on tqpair=0xc30750 00:14:59.177 [2024-11-20 08:27:46.504980] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:59.177 [2024-11-20 08:27:46.504993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:14:59.177 [2024-11-20 08:27:46.505000] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:14:59.177 [2024-11-20 08:27:46.505021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc30750) 00:14:59.177 [2024-11-20 08:27:46.505047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.177 [2024-11-20 08:27:46.505093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94740, cid 0, qid 0 00:14:59.177 [2024-11-20 08:27:46.505187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.177 [2024-11-20 08:27:46.505195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.177 [2024-11-20 08:27:46.505199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94740) on tqpair=0xc30750 00:14:59.177 [2024-11-20 08:27:46.505210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:14:59.177 [2024-11-20 08:27:46.505218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:14:59.177 [2024-11-20 08:27:46.505227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc30750) 00:14:59.177 [2024-11-20 08:27:46.505244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.177 [2024-11-20 08:27:46.505266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94740, cid 0, qid 0 00:14:59.177 [2024-11-20 08:27:46.505353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.177 [2024-11-20 08:27:46.505360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.177 [2024-11-20 08:27:46.505365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94740) on tqpair=0xc30750 00:14:59.177 [2024-11-20 08:27:46.505376] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:14:59.177 [2024-11-20 08:27:46.505385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:59.177 [2024-11-20 08:27:46.505394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc30750) 00:14:59.177 [2024-11-20 08:27:46.505411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:14:59.177 [2024-11-20 08:27:46.505432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94740, cid 0, qid 0 00:14:59.177 [2024-11-20 08:27:46.505487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.177 [2024-11-20 08:27:46.505494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.177 [2024-11-20 08:27:46.505498] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94740) on tqpair=0xc30750 00:14:59.177 [2024-11-20 08:27:46.505509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:59.177 [2024-11-20 08:27:46.505520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc30750) 00:14:59.177 [2024-11-20 08:27:46.505537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.177 [2024-11-20 08:27:46.505557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94740, cid 0, qid 0 00:14:59.177 [2024-11-20 08:27:46.505623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.177 [2024-11-20 08:27:46.505630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.177 [2024-11-20 08:27:46.505634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94740) on tqpair=0xc30750 00:14:59.177 [2024-11-20 08:27:46.505644] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:59.177 [2024-11-20 08:27:46.505650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:59.177 [2024-11-20 08:27:46.505658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:59.177 [2024-11-20 08:27:46.505771] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:14:59.177 [2024-11-20 08:27:46.505777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:59.177 [2024-11-20 08:27:46.505787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc30750) 00:14:59.177 [2024-11-20 08:27:46.505804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.177 [2024-11-20 08:27:46.505827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94740, cid 0, qid 0 00:14:59.177 [2024-11-20 08:27:46.505905] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.177 [2024-11-20 08:27:46.505914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.177 [2024-11-20 08:27:46.505918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94740) on tqpair=0xc30750 00:14:59.177 [2024-11-20 08:27:46.505928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:59.177 [2024-11-20 08:27:46.505939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.177 [2024-11-20 08:27:46.505948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc30750) 00:14:59.177 [2024-11-20 08:27:46.505956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.177 [2024-11-20 08:27:46.505977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94740, cid 0, qid 0 00:14:59.178 [2024-11-20 08:27:46.506036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.178 [2024-11-20 08:27:46.506043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.178 [2024-11-20 08:27:46.506047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94740) on tqpair=0xc30750 00:14:59.178 [2024-11-20 08:27:46.506059] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:59.178 [2024-11-20 08:27:46.506065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:59.178 [2024-11-20 08:27:46.506073] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:14:59.178 [2024-11-20 08:27:46.506097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:59.178 [2024-11-20 08:27:46.506111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc30750) 00:14:59.178 [2024-11-20 08:27:46.506124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.178 [2024-11-20 08:27:46.506145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94740, cid 0, qid 0 00:14:59.178 [2024-11-20 08:27:46.506249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.178 [2024-11-20 08:27:46.506257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.178 [2024-11-20 08:27:46.506262] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506266] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc30750): datao=0, datal=4096, 
cccid=0 00:14:59.178 [2024-11-20 08:27:46.506271] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc94740) on tqpair(0xc30750): expected_datao=0, payload_size=4096 00:14:59.178 [2024-11-20 08:27:46.506277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506313] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506319] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.178 [2024-11-20 08:27:46.506339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.178 [2024-11-20 08:27:46.506343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94740) on tqpair=0xc30750 00:14:59.178 [2024-11-20 08:27:46.506358] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:14:59.178 [2024-11-20 08:27:46.506364] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:14:59.178 [2024-11-20 08:27:46.506368] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:14:59.178 [2024-11-20 08:27:46.506374] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:14:59.178 [2024-11-20 08:27:46.506379] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:14:59.178 [2024-11-20 08:27:46.506385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:14:59.178 [2024-11-20 08:27:46.506400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:59.178 [2024-11-20 08:27:46.506409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc30750) 00:14:59.178 [2024-11-20 08:27:46.506427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:59.178 [2024-11-20 08:27:46.506450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94740, cid 0, qid 0 00:14:59.178 [2024-11-20 08:27:46.506520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.178 [2024-11-20 08:27:46.506528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.178 [2024-11-20 08:27:46.506532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94740) on tqpair=0xc30750 00:14:59.178 [2024-11-20 08:27:46.506550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506559] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc30750) 00:14:59.178 [2024-11-20 08:27:46.506566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.178 [2024-11-20 08:27:46.506574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc30750) 00:14:59.178 [2024-11-20 08:27:46.506589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.178 [2024-11-20 08:27:46.506596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc30750) 00:14:59.178 [2024-11-20 08:27:46.506610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.178 [2024-11-20 08:27:46.506617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc30750) 00:14:59.178 [2024-11-20 08:27:46.506632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.178 [2024-11-20 08:27:46.506637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:59.178 [2024-11-20 08:27:46.506652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:59.178 [2024-11-20 08:27:46.506660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc30750) 00:14:59.178 [2024-11-20 08:27:46.506672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.178 [2024-11-20 08:27:46.506711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94740, cid 0, qid 0 00:14:59.178 [2024-11-20 08:27:46.506719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc948c0, cid 1, qid 0 00:14:59.178 [2024-11-20 08:27:46.506725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94a40, cid 2, qid 0 00:14:59.178 [2024-11-20 08:27:46.506730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94bc0, cid 3, qid 0 00:14:59.178 [2024-11-20 08:27:46.506735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94d40, cid 4, qid 0 00:14:59.178 [2024-11-20 08:27:46.506848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.178 [2024-11-20 08:27:46.506856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.178 
[2024-11-20 08:27:46.506860] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94d40) on tqpair=0xc30750 00:14:59.178 [2024-11-20 08:27:46.506871] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:14:59.178 [2024-11-20 08:27:46.506877] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:14:59.178 [2024-11-20 08:27:46.506889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.506894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc30750) 00:14:59.178 [2024-11-20 08:27:46.506902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.178 [2024-11-20 08:27:46.506925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94d40, cid 4, qid 0 00:14:59.178 [2024-11-20 08:27:46.507005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.178 [2024-11-20 08:27:46.507013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.178 [2024-11-20 08:27:46.507017] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.507021] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc30750): datao=0, datal=4096, cccid=4 00:14:59.178 [2024-11-20 08:27:46.507026] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc94d40) on tqpair(0xc30750): expected_datao=0, payload_size=4096 00:14:59.178 [2024-11-20 08:27:46.507030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.507038] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.507042] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.507051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.178 [2024-11-20 08:27:46.507058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.178 [2024-11-20 08:27:46.507061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.507065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94d40) on tqpair=0xc30750 00:14:59.178 [2024-11-20 08:27:46.507081] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:14:59.178 [2024-11-20 08:27:46.507121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.507128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc30750) 00:14:59.178 [2024-11-20 08:27:46.507136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.178 [2024-11-20 08:27:46.507144] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.507149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.178 [2024-11-20 08:27:46.507153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc30750) 
00:14:59.178 [2024-11-20 08:27:46.507159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.178 [2024-11-20 08:27:46.507203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94d40, cid 4, qid 0 00:14:59.178 [2024-11-20 08:27:46.507212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94ec0, cid 5, qid 0 00:14:59.179 [2024-11-20 08:27:46.507413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.179 [2024-11-20 08:27:46.507431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.179 [2024-11-20 08:27:46.507436] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507440] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc30750): datao=0, datal=1024, cccid=4 00:14:59.179 [2024-11-20 08:27:46.507446] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc94d40) on tqpair(0xc30750): expected_datao=0, payload_size=1024 00:14:59.179 [2024-11-20 08:27:46.507451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507458] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507463] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.179 [2024-11-20 08:27:46.507475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.179 [2024-11-20 08:27:46.507479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94ec0) on tqpair=0xc30750 00:14:59.179 [2024-11-20 08:27:46.507507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.179 [2024-11-20 08:27:46.507516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.179 [2024-11-20 08:27:46.507520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94d40) on tqpair=0xc30750 00:14:59.179 [2024-11-20 08:27:46.507550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc30750) 00:14:59.179 [2024-11-20 08:27:46.507564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.179 [2024-11-20 08:27:46.507593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94d40, cid 4, qid 0 00:14:59.179 [2024-11-20 08:27:46.507698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.179 [2024-11-20 08:27:46.507706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.179 [2024-11-20 08:27:46.507710] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507714] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc30750): datao=0, datal=3072, cccid=4 00:14:59.179 [2024-11-20 08:27:46.507719] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc94d40) on tqpair(0xc30750): expected_datao=0, payload_size=3072 
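
The trace above shows the discovery controller being brought up (FABRIC CONNECT, the CC.EN / CSTS.RDY handshake, Identify Controller, AER configuration, keep-alive timeout) and then the discovery log page (log identifier 0x70, visible as GET LOG PAGE cdw10:00ff0070 / 02ff0070 / 00010070) being pulled in several C2H data PDUs before the human-readable report below is printed. The following is a minimal sketch of the same fetch through SPDK's public host API; it is not the identify tool's code — the single 4 KiB read, the polling loop, and the error handling are illustrative choices, whereas the tool in the trace reads the page in chunked offsets.

```c
/* Sketch: fetch the discovery log page (LID 0x70) from the NVMe-oF discovery
 * controller seen in the trace. Assumes SPDK's public host API; buffer size
 * and polling are illustrative, not spdk_nvme_identify's exact logic. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_done;

static void
get_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE failed\n");
	}
	g_done = true;
}

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvmf_discovery_log_page *log;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same target string the test passes to spdk_nvme_identify -r. */
	spdk_nvme_transport_id_parse(&trid,
		"trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
		"subnqn:nqn.2014-08.org.nvmexpress.discovery");

	/* spdk_nvme_connect() drives the init state machine logged above:
	 * CONNECT, CC.EN/CSTS.RDY, Identify, configure AER, keep-alive. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Single 4 KiB read of the discovery log; nsid 0 as in the trace. */
	log = spdk_dma_zmalloc(4096, 4096, NULL);
	spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					 log, 4096, 0, get_log_done, NULL);
	while (!g_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n", log->genctr, log->numrec);

	spdk_dma_free(log);
	spdk_nvme_detach(ctrlr);
	return 0;
}
```

With the target in this run, such a read would return the two records reported below (the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1), each carrying trtype/adrfam/traddr/trsvcid/subnqn fields.
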
00:14:59.179 [2024-11-20 08:27:46.507724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507731] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507736] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.179 [2024-11-20 08:27:46.507751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.179 [2024-11-20 08:27:46.507755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94d40) on tqpair=0xc30750 00:14:59.179 [2024-11-20 08:27:46.507770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc30750) 00:14:59.179 [2024-11-20 08:27:46.507783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.179 [2024-11-20 08:27:46.507826] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94d40, cid 4, qid 0 00:14:59.179 [2024-11-20 08:27:46.507930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.179 [2024-11-20 08:27:46.507937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.179 [2024-11-20 08:27:46.507941] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.179 [2024-11-20 08:27:46.507945] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc30750): datao=0, datal=8, cccid=4 00:14:59.179 [2024-11-20 08:27:46.507950] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc94d40) on tqpair(0xc30750): expected_datao=0, payload_size=8 00:14:59.179 [2024-11-20 08:27:46.507955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.179 ===================================================== 00:14:59.179 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:59.179 ===================================================== 00:14:59.179 Controller Capabilities/Features 00:14:59.179 ================================ 00:14:59.179 Vendor ID: 0000 00:14:59.179 Subsystem Vendor ID: 0000 00:14:59.179 Serial Number: .................... 00:14:59.179 Model Number: ........................................ 
00:14:59.179 Firmware Version: 25.01 00:14:59.179 Recommended Arb Burst: 0 00:14:59.179 IEEE OUI Identifier: 00 00 00 00:14:59.179 Multi-path I/O 00:14:59.179 May have multiple subsystem ports: No 00:14:59.179 May have multiple controllers: No 00:14:59.179 Associated with SR-IOV VF: No 00:14:59.179 Max Data Transfer Size: 131072 00:14:59.179 Max Number of Namespaces: 0 00:14:59.179 Max Number of I/O Queues: 1024 00:14:59.179 NVMe Specification Version (VS): 1.3 00:14:59.179 NVMe Specification Version (Identify): 1.3 00:14:59.179 Maximum Queue Entries: 128 00:14:59.179 Contiguous Queues Required: Yes 00:14:59.179 Arbitration Mechanisms Supported 00:14:59.179 Weighted Round Robin: Not Supported 00:14:59.179 Vendor Specific: Not Supported 00:14:59.179 Reset Timeout: 15000 ms 00:14:59.179 Doorbell Stride: 4 bytes 00:14:59.179 NVM Subsystem Reset: Not Supported 00:14:59.179 Command Sets Supported 00:14:59.179 NVM Command Set: Supported 00:14:59.179 Boot Partition: Not Supported 00:14:59.179 Memory Page Size Minimum: 4096 bytes 00:14:59.179 Memory Page Size Maximum: 4096 bytes 00:14:59.179 Persistent Memory Region: Not Supported 00:14:59.179 Optional Asynchronous Events Supported 00:14:59.179 Namespace Attribute Notices: Not Supported 00:14:59.179 Firmware Activation Notices: Not Supported 00:14:59.179 ANA Change Notices: Not Supported 00:14:59.179 PLE Aggregate Log Change Notices: Not Supported 00:14:59.179 LBA Status Info Alert Notices: Not Supported 00:14:59.179 EGE Aggregate Log Change Notices: Not Supported 00:14:59.179 Normal NVM Subsystem Shutdown event: Not Supported 00:14:59.179 Zone Descriptor Change Notices: Not Supported 00:14:59.179 Discovery Log Change Notices: Supported 00:14:59.179 Controller Attributes 00:14:59.179 128-bit Host Identifier: Not Supported 00:14:59.179 Non-Operational Permissive Mode: Not Supported 00:14:59.179 NVM Sets: Not Supported 00:14:59.179 Read Recovery Levels: Not Supported 00:14:59.179 Endurance Groups: Not Supported 00:14:59.179 Predictable Latency Mode: Not Supported 00:14:59.179 Traffic Based Keep ALive: Not Supported 00:14:59.179 Namespace Granularity: Not Supported 00:14:59.179 SQ Associations: Not Supported 00:14:59.179 UUID List: Not Supported 00:14:59.179 Multi-Domain Subsystem: Not Supported 00:14:59.179 Fixed Capacity Management: Not Supported 00:14:59.179 Variable Capacity Management: Not Supported 00:14:59.179 Delete Endurance Group: Not Supported 00:14:59.179 Delete NVM Set: Not Supported 00:14:59.179 Extended LBA Formats Supported: Not Supported 00:14:59.179 Flexible Data Placement Supported: Not Supported 00:14:59.179 00:14:59.179 Controller Memory Buffer Support 00:14:59.179 ================================ 00:14:59.179 Supported: No 00:14:59.179 00:14:59.179 Persistent Memory Region Support 00:14:59.179 ================================ 00:14:59.179 Supported: No 00:14:59.179 00:14:59.179 Admin Command Set Attributes 00:14:59.179 ============================ 00:14:59.179 Security Send/Receive: Not Supported 00:14:59.179 Format NVM: Not Supported 00:14:59.179 Firmware Activate/Download: Not Supported 00:14:59.179 Namespace Management: Not Supported 00:14:59.179 Device Self-Test: Not Supported 00:14:59.179 Directives: Not Supported 00:14:59.179 NVMe-MI: Not Supported 00:14:59.179 Virtualization Management: Not Supported 00:14:59.179 Doorbell Buffer Config: Not Supported 00:14:59.179 Get LBA Status Capability: Not Supported 00:14:59.179 Command & Feature Lockdown Capability: Not Supported 00:14:59.179 Abort Command Limit: 1 00:14:59.179 Async 
Event Request Limit: 4 00:14:59.179 Number of Firmware Slots: N/A 00:14:59.179 Firmware Slot 1 Read-Only: N/A 00:14:59.179 Firmware Activation Without Reset: N/A 00:14:59.179 Multiple Update Detection Support: N/A 00:14:59.179 Firmware Update Granularity: No Information Provided 00:14:59.179 Per-Namespace SMART Log: No 00:14:59.179 Asymmetric Namespace Access Log Page: Not Supported 00:14:59.179 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:59.179 Command Effects Log Page: Not Supported 00:14:59.179 Get Log Page Extended Data: Supported 00:14:59.179 Telemetry Log Pages: Not Supported 00:14:59.179 Persistent Event Log Pages: Not Supported 00:14:59.179 Supported Log Pages Log Page: May Support 00:14:59.179 Commands Supported & Effects Log Page: Not Supported 00:14:59.179 Feature Identifiers & Effects Log Page:May Support 00:14:59.179 NVMe-MI Commands & Effects Log Page: May Support 00:14:59.179 Data Area 4 for Telemetry Log: Not Supported 00:14:59.179 Error Log Page Entries Supported: 128 00:14:59.179 Keep Alive: Not Supported 00:14:59.179 00:14:59.179 NVM Command Set Attributes 00:14:59.179 ========================== 00:14:59.179 Submission Queue Entry Size 00:14:59.179 Max: 1 00:14:59.179 Min: 1 00:14:59.179 Completion Queue Entry Size 00:14:59.180 Max: 1 00:14:59.180 Min: 1 00:14:59.180 Number of Namespaces: 0 00:14:59.180 Compare Command: Not Supported 00:14:59.180 Write Uncorrectable Command: Not Supported 00:14:59.180 Dataset Management Command: Not Supported 00:14:59.180 Write Zeroes Command: Not Supported 00:14:59.180 Set Features Save Field: Not Supported 00:14:59.180 Reservations: Not Supported 00:14:59.180 Timestamp: Not Supported 00:14:59.180 Copy: Not Supported 00:14:59.180 Volatile Write Cache: Not Present 00:14:59.180 Atomic Write Unit (Normal): 1 00:14:59.180 Atomic Write Unit (PFail): 1 00:14:59.180 Atomic Compare & Write Unit: 1 00:14:59.180 Fused Compare & Write: Supported 00:14:59.180 Scatter-Gather List 00:14:59.180 SGL Command Set: Supported 00:14:59.180 SGL Keyed: Supported 00:14:59.180 SGL Bit Bucket Descriptor: Not Supported 00:14:59.180 SGL Metadata Pointer: Not Supported 00:14:59.180 Oversized SGL: Not Supported 00:14:59.180 SGL Metadata Address: Not Supported 00:14:59.180 SGL Offset: Supported 00:14:59.180 Transport SGL Data Block: Not Supported 00:14:59.180 Replay Protected Memory Block: Not Supported 00:14:59.180 00:14:59.180 Firmware Slot Information 00:14:59.180 ========================= 00:14:59.180 Active slot: 0 00:14:59.180 00:14:59.180 00:14:59.180 Error Log 00:14:59.180 ========= 00:14:59.180 00:14:59.180 Active Namespaces 00:14:59.180 ================= 00:14:59.180 Discovery Log Page 00:14:59.180 ================== 00:14:59.180 Generation Counter: 2 00:14:59.180 Number of Records: 2 00:14:59.180 Record Format: 0 00:14:59.180 00:14:59.180 Discovery Log Entry 0 00:14:59.180 ---------------------- 00:14:59.180 Transport Type: 3 (TCP) 00:14:59.180 Address Family: 1 (IPv4) 00:14:59.180 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:59.180 Entry Flags: 00:14:59.180 Duplicate Returned Information: 1 00:14:59.180 Explicit Persistent Connection Support for Discovery: 1 00:14:59.180 Transport Requirements: 00:14:59.180 Secure Channel: Not Required 00:14:59.180 Port ID: 0 (0x0000) 00:14:59.180 Controller ID: 65535 (0xffff) 00:14:59.180 Admin Max SQ Size: 128 00:14:59.180 Transport Service Identifier: 4420 00:14:59.180 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:59.180 Transport Address: 10.0.0.3 00:14:59.180 
Discovery Log Entry 1 00:14:59.180 ---------------------- 00:14:59.180 Transport Type: 3 (TCP) 00:14:59.180 Address Family: 1 (IPv4) 00:14:59.180 Subsystem Type: 2 (NVM Subsystem) 00:14:59.180 Entry Flags: 00:14:59.180 Duplicate Returned Information: 0 00:14:59.180 Explicit Persistent Connection Support for Discovery: 0 00:14:59.180 Transport Requirements: 00:14:59.180 Secure Channel: Not Required 00:14:59.180 Port ID: 0 (0x0000) 00:14:59.180 Controller ID: 65535 (0xffff) 00:14:59.180 Admin Max SQ Size: 128 00:14:59.180 Transport Service Identifier: 4420 00:14:59.180 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:59.180 Transport Address: 10.0.0.3 [2024-11-20 08:27:46.507963] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.507967] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.507994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.180 [2024-11-20 08:27:46.508002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.180 [2024-11-20 08:27:46.508006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94d40) on tqpair=0xc30750 00:14:59.180 [2024-11-20 08:27:46.508131] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:14:59.180 [2024-11-20 08:27:46.508146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94740) on tqpair=0xc30750 00:14:59.180 [2024-11-20 08:27:46.508153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.180 [2024-11-20 08:27:46.508160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc948c0) on tqpair=0xc30750 00:14:59.180 [2024-11-20 08:27:46.508165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.180 [2024-11-20 08:27:46.508170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94a40) on tqpair=0xc30750 00:14:59.180 [2024-11-20 08:27:46.508175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.180 [2024-11-20 08:27:46.508181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94bc0) on tqpair=0xc30750 00:14:59.180 [2024-11-20 08:27:46.508186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.180 [2024-11-20 08:27:46.508210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc30750) 00:14:59.180 [2024-11-20 08:27:46.508228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.180 [2024-11-20 08:27:46.508252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94bc0, cid 3, qid 0 00:14:59.180 [2024-11-20 08:27:46.508321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.180 [2024-11-20 08:27:46.508329] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.180 [2024-11-20 08:27:46.508333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94bc0) on tqpair=0xc30750 00:14:59.180 [2024-11-20 08:27:46.508345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc30750) 00:14:59.180 [2024-11-20 08:27:46.508362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.180 [2024-11-20 08:27:46.508387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94bc0, cid 3, qid 0 00:14:59.180 [2024-11-20 08:27:46.508471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.180 [2024-11-20 08:27:46.508479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.180 [2024-11-20 08:27:46.508483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94bc0) on tqpair=0xc30750 00:14:59.180 [2024-11-20 08:27:46.508493] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:14:59.180 [2024-11-20 08:27:46.508498] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:14:59.180 [2024-11-20 08:27:46.508509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc30750) 00:14:59.180 [2024-11-20 08:27:46.508526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.180 [2024-11-20 08:27:46.508546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94bc0, cid 3, qid 0 00:14:59.180 [2024-11-20 08:27:46.508618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.180 [2024-11-20 08:27:46.508626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.180 [2024-11-20 08:27:46.508630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94bc0) on tqpair=0xc30750 00:14:59.180 [2024-11-20 08:27:46.508646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc30750) 00:14:59.180 [2024-11-20 08:27:46.508678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.180 [2024-11-20 08:27:46.508698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94bc0, cid 3, qid 0 00:14:59.180 [2024-11-20 
08:27:46.508775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.180 [2024-11-20 08:27:46.508782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.180 [2024-11-20 08:27:46.508786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94bc0) on tqpair=0xc30750 00:14:59.180 [2024-11-20 08:27:46.508801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.508810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc30750) 00:14:59.180 [2024-11-20 08:27:46.508834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.180 [2024-11-20 08:27:46.508854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94bc0, cid 3, qid 0 00:14:59.180 [2024-11-20 08:27:46.512865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.180 [2024-11-20 08:27:46.512895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.180 [2024-11-20 08:27:46.512901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.512905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94bc0) on tqpair=0xc30750 00:14:59.180 [2024-11-20 08:27:46.512923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.512929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.180 [2024-11-20 08:27:46.512933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc30750) 00:14:59.181 [2024-11-20 08:27:46.512942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.181 [2024-11-20 08:27:46.512971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc94bc0, cid 3, qid 0 00:14:59.181 [2024-11-20 08:27:46.513060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.181 [2024-11-20 08:27:46.513067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.181 [2024-11-20 08:27:46.513071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.181 [2024-11-20 08:27:46.513076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc94bc0) on tqpair=0xc30750 00:14:59.181 [2024-11-20 08:27:46.513085] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:14:59.181 00:14:59.181 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:59.181 [2024-11-20 08:27:46.560277] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
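
After the discovery controller is shut down ("shutdown complete in 4 milliseconds"), the harness reruns spdk_nvme_identify against the NVM subsystem nqn.2016-06.io.spdk:cnode1, and the trace that follows repeats the admin-queue bring-up and then identifies the active namespace (IDENTIFY cdw10:00000002 for the active NS list, 00000000 for the namespace data, 00000003 for NS ID descriptors). A minimal sketch, assuming the same public SPDK host API, of how the namespace side could be walked once the controller is connected; the application name and the printed format are illustrative, not the tool's.

```c
/* Sketch: connect to the NVM subsystem from the trace and list its active
 * namespaces. Assumes SPDK's public host API; output format is illustrative. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;
	uint32_t nsid;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	spdk_nvme_transport_id_parse(&trid,
		"trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1");

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Identify Controller data cached by the driver during bring-up. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("cntlid=0x%04x mdts=%u\n", cdata->cntlid, cdata->mdts);

	/* Walk the active namespace list (Identify CNS 02h/00h in the trace). */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("nsid %u: %" PRIu64 " blocks of %u bytes\n", nsid,
		       spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}
```
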
00:14:59.181 [2024-11-20 08:27:46.560366] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74211 ] 00:14:59.181 [2024-11-20 08:27:46.716682] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:14:59.181 [2024-11-20 08:27:46.716757] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:59.181 [2024-11-20 08:27:46.716765] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:59.181 [2024-11-20 08:27:46.716779] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:59.181 [2024-11-20 08:27:46.716790] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:59.181 [2024-11-20 08:27:46.721183] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:14:59.181 [2024-11-20 08:27:46.721305] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x674750 0 00:14:59.181 [2024-11-20 08:27:46.728871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:59.181 [2024-11-20 08:27:46.728898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:59.181 [2024-11-20 08:27:46.728921] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:59.181 [2024-11-20 08:27:46.728924] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:59.181 [2024-11-20 08:27:46.728965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.181 [2024-11-20 08:27:46.728972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.181 [2024-11-20 08:27:46.728976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x674750) 00:14:59.181 [2024-11-20 08:27:46.729006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:59.181 [2024-11-20 08:27:46.729037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8740, cid 0, qid 0 00:14:59.444 [2024-11-20 08:27:46.736879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.444 [2024-11-20 08:27:46.736919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.444 [2024-11-20 08:27:46.736942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.444 [2024-11-20 08:27:46.736947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8740) on tqpair=0x674750 00:14:59.444 [2024-11-20 08:27:46.736962] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:59.444 [2024-11-20 08:27:46.736971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:14:59.444 [2024-11-20 08:27:46.736978] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:14:59.444 [2024-11-20 08:27:46.736995] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.444 [2024-11-20 08:27:46.737001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.444 [2024-11-20 08:27:46.737005] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x674750) 00:14:59.444 [2024-11-20 08:27:46.737031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.444 [2024-11-20 08:27:46.737065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8740, cid 0, qid 0 00:14:59.444 [2024-11-20 08:27:46.737187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.445 [2024-11-20 08:27:46.737195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.445 [2024-11-20 08:27:46.737199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8740) on tqpair=0x674750 00:14:59.445 [2024-11-20 08:27:46.737210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:14:59.445 [2024-11-20 08:27:46.737233] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:14:59.445 [2024-11-20 08:27:46.737242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x674750) 00:14:59.445 [2024-11-20 08:27:46.737258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.445 [2024-11-20 08:27:46.737277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8740, cid 0, qid 0 00:14:59.445 [2024-11-20 08:27:46.737344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.445 [2024-11-20 08:27:46.737352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.445 [2024-11-20 08:27:46.737355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8740) on tqpair=0x674750 00:14:59.445 [2024-11-20 08:27:46.737366] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:14:59.445 [2024-11-20 08:27:46.737375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:59.445 [2024-11-20 08:27:46.737382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x674750) 00:14:59.445 [2024-11-20 08:27:46.737399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.445 [2024-11-20 08:27:46.737417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8740, cid 0, qid 0 00:14:59.445 [2024-11-20 08:27:46.737508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.445 [2024-11-20 08:27:46.737515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.445 [2024-11-20 
08:27:46.737519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8740) on tqpair=0x674750 00:14:59.445 [2024-11-20 08:27:46.737529] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:59.445 [2024-11-20 08:27:46.737540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x674750) 00:14:59.445 [2024-11-20 08:27:46.737557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.445 [2024-11-20 08:27:46.737577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8740, cid 0, qid 0 00:14:59.445 [2024-11-20 08:27:46.737656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.445 [2024-11-20 08:27:46.737663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.445 [2024-11-20 08:27:46.737666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8740) on tqpair=0x674750 00:14:59.445 [2024-11-20 08:27:46.737681] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:59.445 [2024-11-20 08:27:46.737687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:59.445 [2024-11-20 08:27:46.737695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:59.445 [2024-11-20 08:27:46.737807] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:14:59.445 [2024-11-20 08:27:46.737831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:59.445 [2024-11-20 08:27:46.737843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x674750) 00:14:59.445 [2024-11-20 08:27:46.737859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.445 [2024-11-20 08:27:46.737881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8740, cid 0, qid 0 00:14:59.445 [2024-11-20 08:27:46.737960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.445 [2024-11-20 08:27:46.737976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.445 [2024-11-20 08:27:46.737981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.737985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8740) on tqpair=0x674750 00:14:59.445 
[2024-11-20 08:27:46.737991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:59.445 [2024-11-20 08:27:46.738003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.738015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.738019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x674750) 00:14:59.445 [2024-11-20 08:27:46.738026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.445 [2024-11-20 08:27:46.738047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8740, cid 0, qid 0 00:14:59.445 [2024-11-20 08:27:46.738105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.445 [2024-11-20 08:27:46.738112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.445 [2024-11-20 08:27:46.738116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.738120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8740) on tqpair=0x674750 00:14:59.445 [2024-11-20 08:27:46.738125] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:59.445 [2024-11-20 08:27:46.738131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:59.445 [2024-11-20 08:27:46.738139] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:14:59.445 [2024-11-20 08:27:46.738156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:59.445 [2024-11-20 08:27:46.738168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.738172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x674750) 00:14:59.445 [2024-11-20 08:27:46.738180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.445 [2024-11-20 08:27:46.738200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8740, cid 0, qid 0 00:14:59.445 [2024-11-20 08:27:46.738335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.445 [2024-11-20 08:27:46.738342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.445 [2024-11-20 08:27:46.738346] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.738350] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x674750): datao=0, datal=4096, cccid=0 00:14:59.445 [2024-11-20 08:27:46.738355] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6d8740) on tqpair(0x674750): expected_datao=0, payload_size=4096 00:14:59.445 [2024-11-20 08:27:46.738360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.738369] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.738373] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.738382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.445 [2024-11-20 08:27:46.738388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.445 [2024-11-20 08:27:46.738391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.738395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8740) on tqpair=0x674750 00:14:59.445 [2024-11-20 08:27:46.738404] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:14:59.445 [2024-11-20 08:27:46.738410] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:14:59.445 [2024-11-20 08:27:46.738415] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:14:59.445 [2024-11-20 08:27:46.738419] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:14:59.445 [2024-11-20 08:27:46.738424] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:14:59.445 [2024-11-20 08:27:46.738429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:14:59.445 [2024-11-20 08:27:46.738444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:59.445 [2024-11-20 08:27:46.738453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.738458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.738461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x674750) 00:14:59.445 [2024-11-20 08:27:46.738469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:59.445 [2024-11-20 08:27:46.738490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8740, cid 0, qid 0 00:14:59.445 [2024-11-20 08:27:46.738563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.445 [2024-11-20 08:27:46.738570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.445 [2024-11-20 08:27:46.738574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.445 [2024-11-20 08:27:46.738578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8740) on tqpair=0x674750 00:14:59.446 [2024-11-20 08:27:46.738586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.738590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.738594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x674750) 00:14:59.446 [2024-11-20 08:27:46.738601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.446 [2024-11-20 08:27:46.738607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.738611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.738615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=1 on tqpair(0x674750) 00:14:59.446 [2024-11-20 08:27:46.738621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.446 [2024-11-20 08:27:46.738627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.738631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.738635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x674750) 00:14:59.446 [2024-11-20 08:27:46.738641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.446 [2024-11-20 08:27:46.738647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.738651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.738654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.446 [2024-11-20 08:27:46.738660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.446 [2024-11-20 08:27:46.738665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:59.446 [2024-11-20 08:27:46.738679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:59.446 [2024-11-20 08:27:46.738687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.738691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x674750) 00:14:59.446 [2024-11-20 08:27:46.738698] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.446 [2024-11-20 08:27:46.738719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8740, cid 0, qid 0 00:14:59.446 [2024-11-20 08:27:46.738726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d88c0, cid 1, qid 0 00:14:59.446 [2024-11-20 08:27:46.738731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8a40, cid 2, qid 0 00:14:59.446 [2024-11-20 08:27:46.738736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.446 [2024-11-20 08:27:46.738741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8d40, cid 4, qid 0 00:14:59.446 [2024-11-20 08:27:46.738894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.446 [2024-11-20 08:27:46.738903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.446 [2024-11-20 08:27:46.738907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.738911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8d40) on tqpair=0x674750 00:14:59.446 [2024-11-20 08:27:46.738917] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:14:59.446 [2024-11-20 08:27:46.738923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:14:59.446 [2024-11-20 08:27:46.738933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:14:59.446 [2024-11-20 08:27:46.738945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:59.446 [2024-11-20 08:27:46.738953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.738958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.738962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x674750) 00:14:59.446 [2024-11-20 08:27:46.738970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:59.446 [2024-11-20 08:27:46.738991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8d40, cid 4, qid 0 00:14:59.446 [2024-11-20 08:27:46.739062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.446 [2024-11-20 08:27:46.739069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.446 [2024-11-20 08:27:46.739073] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8d40) on tqpair=0x674750 00:14:59.446 [2024-11-20 08:27:46.739144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:14:59.446 [2024-11-20 08:27:46.739157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:59.446 [2024-11-20 08:27:46.739166] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x674750) 00:14:59.446 [2024-11-20 08:27:46.739178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.446 [2024-11-20 08:27:46.739198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8d40, cid 4, qid 0 00:14:59.446 [2024-11-20 08:27:46.739304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.446 [2024-11-20 08:27:46.739311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.446 [2024-11-20 08:27:46.739315] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739319] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x674750): datao=0, datal=4096, cccid=4 00:14:59.446 [2024-11-20 08:27:46.739324] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6d8d40) on tqpair(0x674750): expected_datao=0, payload_size=4096 00:14:59.446 [2024-11-20 08:27:46.739329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739336] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739340] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.446 [2024-11-20 
08:27:46.739354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.446 [2024-11-20 08:27:46.739358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8d40) on tqpair=0x674750 00:14:59.446 [2024-11-20 08:27:46.739378] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:14:59.446 [2024-11-20 08:27:46.739389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:14:59.446 [2024-11-20 08:27:46.739401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:14:59.446 [2024-11-20 08:27:46.739409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x674750) 00:14:59.446 [2024-11-20 08:27:46.739421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.446 [2024-11-20 08:27:46.739441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8d40, cid 4, qid 0 00:14:59.446 [2024-11-20 08:27:46.739599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.446 [2024-11-20 08:27:46.739607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.446 [2024-11-20 08:27:46.739611] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739615] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x674750): datao=0, datal=4096, cccid=4 00:14:59.446 [2024-11-20 08:27:46.739620] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6d8d40) on tqpair(0x674750): expected_datao=0, payload_size=4096 00:14:59.446 [2024-11-20 08:27:46.739625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739633] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739637] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.446 [2024-11-20 08:27:46.739652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.446 [2024-11-20 08:27:46.739655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.446 [2024-11-20 08:27:46.739659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8d40) on tqpair=0x674750 00:14:59.446 [2024-11-20 08:27:46.739680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:59.447 [2024-11-20 08:27:46.739692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:59.447 [2024-11-20 08:27:46.739702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.739706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x674750) 00:14:59.447 [2024-11-20 08:27:46.739714] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.447 [2024-11-20 08:27:46.739736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8d40, cid 4, qid 0 00:14:59.447 [2024-11-20 08:27:46.739814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.447 [2024-11-20 08:27:46.739835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.447 [2024-11-20 08:27:46.739840] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.739844] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x674750): datao=0, datal=4096, cccid=4 00:14:59.447 [2024-11-20 08:27:46.739849] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6d8d40) on tqpair(0x674750): expected_datao=0, payload_size=4096 00:14:59.447 [2024-11-20 08:27:46.739854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.739876] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.739880] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.739889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.447 [2024-11-20 08:27:46.739896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.447 [2024-11-20 08:27:46.739899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.739903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8d40) on tqpair=0x674750 00:14:59.447 [2024-11-20 08:27:46.739913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:59.447 [2024-11-20 08:27:46.739923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:14:59.447 [2024-11-20 08:27:46.739934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:14:59.447 [2024-11-20 08:27:46.739942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:59.447 [2024-11-20 08:27:46.739947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:59.447 [2024-11-20 08:27:46.739953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:14:59.447 [2024-11-20 08:27:46.739959] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:14:59.447 [2024-11-20 08:27:46.739963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:14:59.447 [2024-11-20 08:27:46.739969] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:14:59.447 [2024-11-20 08:27:46.739986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.739991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x674750) 00:14:59.447 [2024-11-20 08:27:46.739999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.447 [2024-11-20 08:27:46.740006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x674750) 00:14:59.447 [2024-11-20 08:27:46.740020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.447 [2024-11-20 08:27:46.740048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8d40, cid 4, qid 0 00:14:59.447 [2024-11-20 08:27:46.740056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8ec0, cid 5, qid 0 00:14:59.447 [2024-11-20 08:27:46.740143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.447 [2024-11-20 08:27:46.740150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.447 [2024-11-20 08:27:46.740154] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8d40) on tqpair=0x674750 00:14:59.447 [2024-11-20 08:27:46.740165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.447 [2024-11-20 08:27:46.740171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.447 [2024-11-20 08:27:46.740175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8ec0) on tqpair=0x674750 00:14:59.447 [2024-11-20 08:27:46.740189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x674750) 00:14:59.447 [2024-11-20 08:27:46.740201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.447 [2024-11-20 08:27:46.740219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8ec0, cid 5, qid 0 00:14:59.447 [2024-11-20 08:27:46.740278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.447 [2024-11-20 08:27:46.740285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.447 [2024-11-20 08:27:46.740289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8ec0) on tqpair=0x674750 00:14:59.447 [2024-11-20 08:27:46.740304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x674750) 00:14:59.447 [2024-11-20 08:27:46.740315] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.447 [2024-11-20 08:27:46.740332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8ec0, cid 5, qid 0 00:14:59.447 [2024-11-20 08:27:46.740405] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.447 [2024-11-20 08:27:46.740412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.447 [2024-11-20 08:27:46.740416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8ec0) on tqpair=0x674750 00:14:59.447 [2024-11-20 08:27:46.740430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x674750) 00:14:59.447 [2024-11-20 08:27:46.740442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.447 [2024-11-20 08:27:46.740458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8ec0, cid 5, qid 0 00:14:59.447 [2024-11-20 08:27:46.740521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.447 [2024-11-20 08:27:46.740540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.447 [2024-11-20 08:27:46.740545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8ec0) on tqpair=0x674750 00:14:59.447 [2024-11-20 08:27:46.740586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x674750) 00:14:59.447 [2024-11-20 08:27:46.740600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.447 [2024-11-20 08:27:46.740608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x674750) 00:14:59.447 [2024-11-20 08:27:46.740619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.447 [2024-11-20 08:27:46.740627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x674750) 00:14:59.447 [2024-11-20 08:27:46.740638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.447 [2024-11-20 08:27:46.740646] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.740651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x674750) 00:14:59.447 [2024-11-20 08:27:46.740657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.447 [2024-11-20 08:27:46.740679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8ec0, cid 5, qid 0 00:14:59.447 [2024-11-20 08:27:46.740687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8d40, cid 4, qid 0 00:14:59.447 
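[editor's note] The *NOTICE* records in the stretch just above show the host driver, once namespace identification is done, priming its feature and log-page caches on the admin queue: GET FEATURES for arbitration (0x01), power management (0x02), temperature threshold (0x04) and number of queues (0x07), followed by GET LOG PAGE for the error (0x01), SMART/health (0x02), firmware slot (0x03) and commands-supported-and-effects (0x05) pages. As a rough cross-check, the same data can be pulled by hand with nvme-cli against this target; the sketch below is only illustrative and assumes nvme-cli is installed and that the connected controller enumerates as /dev/nvme0 (the address, port and NQN are the ones printed further down in this log).

# Hedged sketch: fetch the same features/log pages the SPDK host driver requests
# during init. Assumes nvme-cli is available and the controller shows up as /dev/nvme0.
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
for fid in 0x01 0x02 0x04 0x07; do
    nvme get-feature /dev/nvme0 -f "$fid" -H      # arbitration, power mgmt, temp threshold, number of queues
done
for lid in 0x01 0x02 0x03 0x05; do
    nvme get-log /dev/nvme0 --log-id="$lid" --log-len=512   # error, SMART/health, FW slot, cmd effects
done
nvme disconnect -n nqn.2016-06.io.spdk:cnode1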
[2024-11-20 08:27:46.740692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d9040, cid 6, qid 0 00:14:59.447 [2024-11-20 08:27:46.740697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d91c0, cid 7, qid 0 00:14:59.447 [2024-11-20 08:27:46.744906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.447 [2024-11-20 08:27:46.744926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.447 [2024-11-20 08:27:46.744948] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.744952] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x674750): datao=0, datal=8192, cccid=5 00:14:59.447 [2024-11-20 08:27:46.744957] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6d8ec0) on tqpair(0x674750): expected_datao=0, payload_size=8192 00:14:59.447 [2024-11-20 08:27:46.744962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.744986] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.744991] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.744997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.447 [2024-11-20 08:27:46.745003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.447 [2024-11-20 08:27:46.745006] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.745020] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x674750): datao=0, datal=512, cccid=4 00:14:59.447 [2024-11-20 08:27:46.745024] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6d8d40) on tqpair(0x674750): expected_datao=0, payload_size=512 00:14:59.447 [2024-11-20 08:27:46.745029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.745035] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.447 [2024-11-20 08:27:46.745039] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.448 [2024-11-20 08:27:46.745050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.448 [2024-11-20 08:27:46.745054] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745057] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x674750): datao=0, datal=512, cccid=6 00:14:59.448 [2024-11-20 08:27:46.745062] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6d9040) on tqpair(0x674750): expected_datao=0, payload_size=512 00:14:59.448 [2024-11-20 08:27:46.745066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745073] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745076] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:59.448 [2024-11-20 08:27:46.745087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:59.448 [2024-11-20 08:27:46.745091] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745110] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x674750): datao=0, datal=4096, cccid=7 00:14:59.448 [2024-11-20 08:27:46.745115] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6d91c0) on tqpair(0x674750): expected_datao=0, payload_size=4096 00:14:59.448 [2024-11-20 08:27:46.745119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745136] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745139] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.448 [2024-11-20 08:27:46.745151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.448 [2024-11-20 08:27:46.745155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8ec0) on tqpair=0x674750 00:14:59.448 [2024-11-20 08:27:46.745179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.448 [2024-11-20 08:27:46.745186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.448 [2024-11-20 08:27:46.745190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8d40) on tqpair=0x674750 00:14:59.448 [2024-11-20 08:27:46.745207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.448 [2024-11-20 08:27:46.745214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.448 [2024-11-20 08:27:46.745217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d9040) on tqpair=0x674750 00:14:59.448 [2024-11-20 08:27:46.745229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.448 [2024-11-20 08:27:46.745235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.448 [2024-11-20 08:27:46.745239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.448 [2024-11-20 08:27:46.745243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d91c0) on tqpair=0x674750 00:14:59.448 ===================================================== 00:14:59.448 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:59.448 ===================================================== 00:14:59.448 Controller Capabilities/Features 00:14:59.448 ================================ 00:14:59.448 Vendor ID: 8086 00:14:59.448 Subsystem Vendor ID: 8086 00:14:59.448 Serial Number: SPDK00000000000001 00:14:59.448 Model Number: SPDK bdev Controller 00:14:59.448 Firmware Version: 25.01 00:14:59.448 Recommended Arb Burst: 6 00:14:59.448 IEEE OUI Identifier: e4 d2 5c 00:14:59.448 Multi-path I/O 00:14:59.448 May have multiple subsystem ports: Yes 00:14:59.448 May have multiple controllers: Yes 00:14:59.448 Associated with SR-IOV VF: No 00:14:59.448 Max Data Transfer Size: 131072 00:14:59.448 Max Number of Namespaces: 32 00:14:59.448 Max Number of I/O Queues: 127 00:14:59.448 NVMe Specification Version (VS): 1.3 00:14:59.448 NVMe Specification Version (Identify): 1.3 00:14:59.448 Maximum Queue Entries: 128 00:14:59.448 Contiguous Queues Required: Yes 00:14:59.448 Arbitration Mechanisms Supported 00:14:59.448 Weighted Round Robin: Not 
Supported 00:14:59.448 Vendor Specific: Not Supported 00:14:59.448 Reset Timeout: 15000 ms 00:14:59.448 Doorbell Stride: 4 bytes 00:14:59.448 NVM Subsystem Reset: Not Supported 00:14:59.448 Command Sets Supported 00:14:59.448 NVM Command Set: Supported 00:14:59.448 Boot Partition: Not Supported 00:14:59.448 Memory Page Size Minimum: 4096 bytes 00:14:59.448 Memory Page Size Maximum: 4096 bytes 00:14:59.448 Persistent Memory Region: Not Supported 00:14:59.448 Optional Asynchronous Events Supported 00:14:59.448 Namespace Attribute Notices: Supported 00:14:59.448 Firmware Activation Notices: Not Supported 00:14:59.448 ANA Change Notices: Not Supported 00:14:59.448 PLE Aggregate Log Change Notices: Not Supported 00:14:59.448 LBA Status Info Alert Notices: Not Supported 00:14:59.448 EGE Aggregate Log Change Notices: Not Supported 00:14:59.448 Normal NVM Subsystem Shutdown event: Not Supported 00:14:59.448 Zone Descriptor Change Notices: Not Supported 00:14:59.448 Discovery Log Change Notices: Not Supported 00:14:59.448 Controller Attributes 00:14:59.448 128-bit Host Identifier: Supported 00:14:59.448 Non-Operational Permissive Mode: Not Supported 00:14:59.448 NVM Sets: Not Supported 00:14:59.448 Read Recovery Levels: Not Supported 00:14:59.448 Endurance Groups: Not Supported 00:14:59.448 Predictable Latency Mode: Not Supported 00:14:59.448 Traffic Based Keep ALive: Not Supported 00:14:59.448 Namespace Granularity: Not Supported 00:14:59.448 SQ Associations: Not Supported 00:14:59.448 UUID List: Not Supported 00:14:59.448 Multi-Domain Subsystem: Not Supported 00:14:59.448 Fixed Capacity Management: Not Supported 00:14:59.448 Variable Capacity Management: Not Supported 00:14:59.448 Delete Endurance Group: Not Supported 00:14:59.448 Delete NVM Set: Not Supported 00:14:59.448 Extended LBA Formats Supported: Not Supported 00:14:59.448 Flexible Data Placement Supported: Not Supported 00:14:59.448 00:14:59.448 Controller Memory Buffer Support 00:14:59.448 ================================ 00:14:59.448 Supported: No 00:14:59.448 00:14:59.448 Persistent Memory Region Support 00:14:59.448 ================================ 00:14:59.448 Supported: No 00:14:59.448 00:14:59.448 Admin Command Set Attributes 00:14:59.448 ============================ 00:14:59.448 Security Send/Receive: Not Supported 00:14:59.448 Format NVM: Not Supported 00:14:59.448 Firmware Activate/Download: Not Supported 00:14:59.448 Namespace Management: Not Supported 00:14:59.448 Device Self-Test: Not Supported 00:14:59.448 Directives: Not Supported 00:14:59.448 NVMe-MI: Not Supported 00:14:59.448 Virtualization Management: Not Supported 00:14:59.448 Doorbell Buffer Config: Not Supported 00:14:59.448 Get LBA Status Capability: Not Supported 00:14:59.448 Command & Feature Lockdown Capability: Not Supported 00:14:59.448 Abort Command Limit: 4 00:14:59.448 Async Event Request Limit: 4 00:14:59.448 Number of Firmware Slots: N/A 00:14:59.448 Firmware Slot 1 Read-Only: N/A 00:14:59.448 Firmware Activation Without Reset: N/A 00:14:59.448 Multiple Update Detection Support: N/A 00:14:59.448 Firmware Update Granularity: No Information Provided 00:14:59.448 Per-Namespace SMART Log: No 00:14:59.448 Asymmetric Namespace Access Log Page: Not Supported 00:14:59.448 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:59.448 Command Effects Log Page: Supported 00:14:59.448 Get Log Page Extended Data: Supported 00:14:59.448 Telemetry Log Pages: Not Supported 00:14:59.448 Persistent Event Log Pages: Not Supported 00:14:59.448 Supported Log Pages Log Page: May 
Support 00:14:59.448 Commands Supported & Effects Log Page: Not Supported 00:14:59.448 Feature Identifiers & Effects Log Page:May Support 00:14:59.448 NVMe-MI Commands & Effects Log Page: May Support 00:14:59.448 Data Area 4 for Telemetry Log: Not Supported 00:14:59.448 Error Log Page Entries Supported: 128 00:14:59.448 Keep Alive: Supported 00:14:59.448 Keep Alive Granularity: 10000 ms 00:14:59.448 00:14:59.448 NVM Command Set Attributes 00:14:59.448 ========================== 00:14:59.448 Submission Queue Entry Size 00:14:59.448 Max: 64 00:14:59.448 Min: 64 00:14:59.448 Completion Queue Entry Size 00:14:59.448 Max: 16 00:14:59.448 Min: 16 00:14:59.448 Number of Namespaces: 32 00:14:59.448 Compare Command: Supported 00:14:59.448 Write Uncorrectable Command: Not Supported 00:14:59.448 Dataset Management Command: Supported 00:14:59.448 Write Zeroes Command: Supported 00:14:59.448 Set Features Save Field: Not Supported 00:14:59.448 Reservations: Supported 00:14:59.448 Timestamp: Not Supported 00:14:59.448 Copy: Supported 00:14:59.448 Volatile Write Cache: Present 00:14:59.448 Atomic Write Unit (Normal): 1 00:14:59.448 Atomic Write Unit (PFail): 1 00:14:59.448 Atomic Compare & Write Unit: 1 00:14:59.448 Fused Compare & Write: Supported 00:14:59.448 Scatter-Gather List 00:14:59.448 SGL Command Set: Supported 00:14:59.449 SGL Keyed: Supported 00:14:59.449 SGL Bit Bucket Descriptor: Not Supported 00:14:59.449 SGL Metadata Pointer: Not Supported 00:14:59.449 Oversized SGL: Not Supported 00:14:59.449 SGL Metadata Address: Not Supported 00:14:59.449 SGL Offset: Supported 00:14:59.449 Transport SGL Data Block: Not Supported 00:14:59.449 Replay Protected Memory Block: Not Supported 00:14:59.449 00:14:59.449 Firmware Slot Information 00:14:59.449 ========================= 00:14:59.449 Active slot: 1 00:14:59.449 Slot 1 Firmware Revision: 25.01 00:14:59.449 00:14:59.449 00:14:59.449 Commands Supported and Effects 00:14:59.449 ============================== 00:14:59.449 Admin Commands 00:14:59.449 -------------- 00:14:59.449 Get Log Page (02h): Supported 00:14:59.449 Identify (06h): Supported 00:14:59.449 Abort (08h): Supported 00:14:59.449 Set Features (09h): Supported 00:14:59.449 Get Features (0Ah): Supported 00:14:59.449 Asynchronous Event Request (0Ch): Supported 00:14:59.449 Keep Alive (18h): Supported 00:14:59.449 I/O Commands 00:14:59.449 ------------ 00:14:59.449 Flush (00h): Supported LBA-Change 00:14:59.449 Write (01h): Supported LBA-Change 00:14:59.449 Read (02h): Supported 00:14:59.449 Compare (05h): Supported 00:14:59.449 Write Zeroes (08h): Supported LBA-Change 00:14:59.449 Dataset Management (09h): Supported LBA-Change 00:14:59.449 Copy (19h): Supported LBA-Change 00:14:59.449 00:14:59.449 Error Log 00:14:59.449 ========= 00:14:59.449 00:14:59.449 Arbitration 00:14:59.449 =========== 00:14:59.449 Arbitration Burst: 1 00:14:59.449 00:14:59.449 Power Management 00:14:59.449 ================ 00:14:59.449 Number of Power States: 1 00:14:59.449 Current Power State: Power State #0 00:14:59.449 Power State #0: 00:14:59.449 Max Power: 0.00 W 00:14:59.449 Non-Operational State: Operational 00:14:59.449 Entry Latency: Not Reported 00:14:59.449 Exit Latency: Not Reported 00:14:59.449 Relative Read Throughput: 0 00:14:59.449 Relative Read Latency: 0 00:14:59.449 Relative Write Throughput: 0 00:14:59.449 Relative Write Latency: 0 00:14:59.449 Idle Power: Not Reported 00:14:59.449 Active Power: Not Reported 00:14:59.449 Non-Operational Permissive Mode: Not Supported 00:14:59.449 00:14:59.449 Health 
Information 00:14:59.449 ================== 00:14:59.449 Critical Warnings: 00:14:59.449 Available Spare Space: OK 00:14:59.449 Temperature: OK 00:14:59.449 Device Reliability: OK 00:14:59.449 Read Only: No 00:14:59.449 Volatile Memory Backup: OK 00:14:59.449 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:59.449 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:59.449 Available Spare: 0% 00:14:59.449 Available Spare Threshold: 0% 00:14:59.449 Life Percentage Used:[2024-11-20 08:27:46.745382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.449 [2024-11-20 08:27:46.745389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x674750) 00:14:59.449 [2024-11-20 08:27:46.745398] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.449 [2024-11-20 08:27:46.745428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d91c0, cid 7, qid 0 00:14:59.449 [2024-11-20 08:27:46.745503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.449 [2024-11-20 08:27:46.745511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.449 [2024-11-20 08:27:46.745515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.449 [2024-11-20 08:27:46.745519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d91c0) on tqpair=0x674750 00:14:59.449 [2024-11-20 08:27:46.745559] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:14:59.449 [2024-11-20 08:27:46.745586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8740) on tqpair=0x674750 00:14:59.449 [2024-11-20 08:27:46.745594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.449 [2024-11-20 08:27:46.745599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d88c0) on tqpair=0x674750 00:14:59.449 [2024-11-20 08:27:46.745604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.449 [2024-11-20 08:27:46.745609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8a40) on tqpair=0x674750 00:14:59.449 [2024-11-20 08:27:46.745613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.449 [2024-11-20 08:27:46.745618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.449 [2024-11-20 08:27:46.745623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.449 [2024-11-20 08:27:46.745632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.449 [2024-11-20 08:27:46.745637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.449 [2024-11-20 08:27:46.745641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.449 [2024-11-20 08:27:46.745648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.449 [2024-11-20 08:27:46.745671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.449 [2024-11-20 
08:27:46.745728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.449 [2024-11-20 08:27:46.745735] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.449 [2024-11-20 08:27:46.745739] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.449 [2024-11-20 08:27:46.745743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.449 [2024-11-20 08:27:46.745751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.449 [2024-11-20 08:27:46.745755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.449 [2024-11-20 08:27:46.745759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.449 [2024-11-20 08:27:46.745766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.449 [2024-11-20 08:27:46.745787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.449 [2024-11-20 08:27:46.745930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.449 [2024-11-20 08:27:46.745939] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.449 [2024-11-20 08:27:46.745943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.449 [2024-11-20 08:27:46.745947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.449 [2024-11-20 08:27:46.745952] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:14:59.449 [2024-11-20 08:27:46.745957] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:14:59.449 [2024-11-20 08:27:46.745968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.449 [2024-11-20 08:27:46.745973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.449 [2024-11-20 08:27:46.745977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.449 [2024-11-20 08:27:46.745985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.449 [2024-11-20 08:27:46.746005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.449 [2024-11-20 08:27:46.746071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.449 [2024-11-20 08:27:46.746078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.449 [2024-11-20 08:27:46.746081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.449 [2024-11-20 08:27:46.746085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.450 [2024-11-20 08:27:46.746096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.450 [2024-11-20 08:27:46.746112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.450 [2024-11-20 08:27:46.746129] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.450 [2024-11-20 08:27:46.746194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.450 [2024-11-20 08:27:46.746200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.450 [2024-11-20 08:27:46.746204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.450 [2024-11-20 08:27:46.746218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.450 [2024-11-20 08:27:46.746233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.450 [2024-11-20 08:27:46.746250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.450 [2024-11-20 08:27:46.746307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.450 [2024-11-20 08:27:46.746314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.450 [2024-11-20 08:27:46.746334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.450 [2024-11-20 08:27:46.746348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.450 [2024-11-20 08:27:46.746365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.450 [2024-11-20 08:27:46.746382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.450 [2024-11-20 08:27:46.746440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.450 [2024-11-20 08:27:46.746446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.450 [2024-11-20 08:27:46.746450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.450 [2024-11-20 08:27:46.746465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.450 [2024-11-20 08:27:46.746481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.450 [2024-11-20 08:27:46.746498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.450 [2024-11-20 08:27:46.746562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.450 [2024-11-20 
08:27:46.746569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.450 [2024-11-20 08:27:46.746572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.450 [2024-11-20 08:27:46.746587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.450 [2024-11-20 08:27:46.746603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.450 [2024-11-20 08:27:46.746619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.450 [2024-11-20 08:27:46.746676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.450 [2024-11-20 08:27:46.746688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.450 [2024-11-20 08:27:46.746692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.450 [2024-11-20 08:27:46.746707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.450 [2024-11-20 08:27:46.746724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.450 [2024-11-20 08:27:46.746741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.450 [2024-11-20 08:27:46.746844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.450 [2024-11-20 08:27:46.746859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.450 [2024-11-20 08:27:46.746864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.450 [2024-11-20 08:27:46.746880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.746890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.450 [2024-11-20 08:27:46.746897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.450 [2024-11-20 08:27:46.746917] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.450 [2024-11-20 08:27:46.746980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.450 [2024-11-20 08:27:46.746995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.450 [2024-11-20 08:27:46.747000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.450 [2024-11-20 
08:27:46.747004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.450 [2024-11-20 08:27:46.747015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.747020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.747024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.450 [2024-11-20 08:27:46.747032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.450 [2024-11-20 08:27:46.747051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.450 [2024-11-20 08:27:46.747107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.450 [2024-11-20 08:27:46.747118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.450 [2024-11-20 08:27:46.747122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.747126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.450 [2024-11-20 08:27:46.747137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.747142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.747146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.450 [2024-11-20 08:27:46.747154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.450 [2024-11-20 08:27:46.747186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.450 [2024-11-20 08:27:46.747241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.450 [2024-11-20 08:27:46.747252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.450 [2024-11-20 08:27:46.747256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.747261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.450 [2024-11-20 08:27:46.747271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.747276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.747280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.450 [2024-11-20 08:27:46.747287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.450 [2024-11-20 08:27:46.747304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.450 [2024-11-20 08:27:46.747364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.450 [2024-11-20 08:27:46.747375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.450 [2024-11-20 08:27:46.747379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.747383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.450 [2024-11-20 08:27:46.747394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
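[editor's note] The long run of near-identical "FABRIC PROPERTY GET qid:0 cid:3" records through here is the shutdown handshake: after "Prepare to destruct SSD" the driver issued a fabrics property set to write CC.SHN (normal shutdown) and is now polling the CSTS property until SHST reports shutdown complete ("shutdown complete in 7 milliseconds" appears a little further down). A rough manual equivalent is sketched below; it assumes nvme-cli's fabrics property commands are available and that the controller is /dev/nvme0, and the register offsets and bit fields (CC at 0x14, SHN bits 15:14; CSTS at 0x1c, SHST bits 03:02) are taken from the NVMe base specification, not from this log.

# Sketch only, not the driver's code: watch the same properties the host is polling here.
nvme get-property /dev/nvme0 --offset=0x14 -H    # CC: SHN (bits 15:14) set to 01b requests a normal shutdown
for _ in $(seq 1 10); do
    nvme get-property /dev/nvme0 --offset=0x1c -H    # CSTS: SHST (bits 03:02) reads 10b once shutdown completes
    sleep 0.1
done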
00:14:59.450 [2024-11-20 08:27:46.747398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.450 [2024-11-20 08:27:46.747402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.450 [2024-11-20 08:27:46.747410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.450 [2024-11-20 08:27:46.747427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.450 [2024-11-20 08:27:46.747495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.450 [2024-11-20 08:27:46.747502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.450 [2024-11-20 08:27:46.747506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.747510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.451 [2024-11-20 08:27:46.747520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.747525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.747529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.451 [2024-11-20 08:27:46.747563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.451 [2024-11-20 08:27:46.747582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.451 [2024-11-20 08:27:46.747638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.451 [2024-11-20 08:27:46.747645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.451 [2024-11-20 08:27:46.747649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.747653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.451 [2024-11-20 08:27:46.747663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.747668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.747672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.451 [2024-11-20 08:27:46.747680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.451 [2024-11-20 08:27:46.747697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.451 [2024-11-20 08:27:46.747750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.451 [2024-11-20 08:27:46.747758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.451 [2024-11-20 08:27:46.747762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.747766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.451 [2024-11-20 08:27:46.747776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.747781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.747785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x674750) 00:14:59.451 [2024-11-20 08:27:46.747792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.451 [2024-11-20 08:27:46.747809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.451 [2024-11-20 08:27:46.747911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.451 [2024-11-20 08:27:46.747920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.451 [2024-11-20 08:27:46.747923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.747927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.451 [2024-11-20 08:27:46.747938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.747943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.747947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.451 [2024-11-20 08:27:46.747954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.451 [2024-11-20 08:27:46.747974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.451 [2024-11-20 08:27:46.748035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.451 [2024-11-20 08:27:46.748042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.451 [2024-11-20 08:27:46.748046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.451 [2024-11-20 08:27:46.748061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.451 [2024-11-20 08:27:46.748077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.451 [2024-11-20 08:27:46.748094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.451 [2024-11-20 08:27:46.748159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.451 [2024-11-20 08:27:46.748166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.451 [2024-11-20 08:27:46.748169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.451 [2024-11-20 08:27:46.748184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.451 [2024-11-20 08:27:46.748200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.451 
[2024-11-20 08:27:46.748216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.451 [2024-11-20 08:27:46.748278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.451 [2024-11-20 08:27:46.748285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.451 [2024-11-20 08:27:46.748288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.451 [2024-11-20 08:27:46.748303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.451 [2024-11-20 08:27:46.748318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.451 [2024-11-20 08:27:46.748335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.451 [2024-11-20 08:27:46.748396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.451 [2024-11-20 08:27:46.748407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.451 [2024-11-20 08:27:46.748411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.451 [2024-11-20 08:27:46.748427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.451 [2024-11-20 08:27:46.748443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.451 [2024-11-20 08:27:46.748461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.451 [2024-11-20 08:27:46.748517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.451 [2024-11-20 08:27:46.748524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.451 [2024-11-20 08:27:46.748528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.451 [2024-11-20 08:27:46.748542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.451 [2024-11-20 08:27:46.748558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.451 [2024-11-20 08:27:46.748591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.451 [2024-11-20 08:27:46.748655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:14:59.451 [2024-11-20 08:27:46.748662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.451 [2024-11-20 08:27:46.748666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.451 [2024-11-20 08:27:46.748681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.451 [2024-11-20 08:27:46.748698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.451 [2024-11-20 08:27:46.748716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.451 [2024-11-20 08:27:46.748776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.451 [2024-11-20 08:27:46.748783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.451 [2024-11-20 08:27:46.748787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.451 [2024-11-20 08:27:46.748802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:59.451 [2024-11-20 08:27:46.748811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x674750) 00:14:59.451 [2024-11-20 08:27:46.752858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.451 [2024-11-20 08:27:46.752918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6d8bc0, cid 3, qid 0 00:14:59.452 [2024-11-20 08:27:46.752993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:59.452 [2024-11-20 08:27:46.753000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:59.452 [2024-11-20 08:27:46.753004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:59.452 [2024-11-20 08:27:46.753008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6d8bc0) on tqpair=0x674750 00:14:59.452 [2024-11-20 08:27:46.753017] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:14:59.452 0% 00:14:59.452 Data Units Read: 0 00:14:59.452 Data Units Written: 0 00:14:59.452 Host Read Commands: 0 00:14:59.452 Host Write Commands: 0 00:14:59.452 Controller Busy Time: 0 minutes 00:14:59.452 Power Cycles: 0 00:14:59.452 Power On Hours: 0 hours 00:14:59.452 Unsafe Shutdowns: 0 00:14:59.452 Unrecoverable Media Errors: 0 00:14:59.452 Lifetime Error Log Entries: 0 00:14:59.452 Warning Temperature Time: 0 minutes 00:14:59.452 Critical Temperature Time: 0 minutes 00:14:59.452 00:14:59.452 Number of Queues 00:14:59.452 ================ 00:14:59.452 Number of I/O Submission Queues: 127 00:14:59.452 Number of I/O Completion Queues: 127 00:14:59.452 00:14:59.452 Active Namespaces 00:14:59.452 ================= 00:14:59.452 Namespace ID:1 00:14:59.452 Error Recovery Timeout: Unlimited 
00:14:59.452 Command Set Identifier: NVM (00h) 00:14:59.452 Deallocate: Supported 00:14:59.452 Deallocated/Unwritten Error: Not Supported 00:14:59.452 Deallocated Read Value: Unknown 00:14:59.452 Deallocate in Write Zeroes: Not Supported 00:14:59.452 Deallocated Guard Field: 0xFFFF 00:14:59.452 Flush: Supported 00:14:59.452 Reservation: Supported 00:14:59.452 Namespace Sharing Capabilities: Multiple Controllers 00:14:59.452 Size (in LBAs): 131072 (0GiB) 00:14:59.452 Capacity (in LBAs): 131072 (0GiB) 00:14:59.452 Utilization (in LBAs): 131072 (0GiB) 00:14:59.452 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:59.452 EUI64: ABCDEF0123456789 00:14:59.452 UUID: c09fd871-ff3d-4e6e-8f4e-befaa1c3d973 00:14:59.452 Thin Provisioning: Not Supported 00:14:59.452 Per-NS Atomic Units: Yes 00:14:59.452 Atomic Boundary Size (Normal): 0 00:14:59.452 Atomic Boundary Size (PFail): 0 00:14:59.452 Atomic Boundary Offset: 0 00:14:59.452 Maximum Single Source Range Length: 65535 00:14:59.452 Maximum Copy Length: 65535 00:14:59.452 Maximum Source Range Count: 1 00:14:59.452 NGUID/EUI64 Never Reused: No 00:14:59.452 Namespace Write Protected: No 00:14:59.452 Number of LBA Formats: 1 00:14:59.452 Current LBA Format: LBA Format #00 00:14:59.452 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:59.452 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@566 -- # xtrace_disable 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:59.452 rmmod nvme_tcp 00:14:59.452 rmmod nvme_fabrics 00:14:59.452 rmmod nvme_keyring 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74177 ']' 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74177 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' -z 74177 ']' 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@961 -- # kill -0 74177 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # uname 00:14:59.452 08:27:46 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 74177 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:14:59.452 killing process with pid 74177 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@975 -- # echo 'killing process with pid 74177' 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # kill 74177 00:14:59.452 08:27:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@981 -- # wait 74177 00:14:59.712 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:59.712 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:59.712 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:59.712 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:59.712 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:14:59.712 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:59.712 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:14:59.712 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:59.712 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:59.712 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:59.712 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:59.712 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.971 08:27:47 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:59.971 00:14:59.971 real 0m2.510s 00:14:59.971 user 0m5.008s 00:14:59.971 sys 0m0.807s 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1133 -- # xtrace_disable 00:14:59.971 ************************************ 00:14:59.971 END TEST nvmf_identify 00:14:59.971 ************************************ 00:14:59.971 08:27:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.231 08:27:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:00.231 08:27:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:15:00.231 08:27:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1114 -- # xtrace_disable 00:15:00.231 08:27:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:00.231 ************************************ 00:15:00.231 START TEST nvmf_perf 00:15:00.231 ************************************ 00:15:00.231 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:00.231 * Looking for test storage... 00:15:00.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:00.231 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:15:00.231 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1638 -- # lcov --version 00:15:00.231 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:15:00.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.492 --rc genhtml_branch_coverage=1 00:15:00.492 --rc genhtml_function_coverage=1 00:15:00.492 --rc genhtml_legend=1 00:15:00.492 --rc geninfo_all_blocks=1 00:15:00.492 --rc geninfo_unexecuted_blocks=1 00:15:00.492 00:15:00.492 ' 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:15:00.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.492 --rc genhtml_branch_coverage=1 00:15:00.492 --rc genhtml_function_coverage=1 00:15:00.492 --rc genhtml_legend=1 00:15:00.492 --rc geninfo_all_blocks=1 00:15:00.492 --rc geninfo_unexecuted_blocks=1 00:15:00.492 00:15:00.492 ' 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:15:00.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.492 --rc genhtml_branch_coverage=1 00:15:00.492 --rc genhtml_function_coverage=1 00:15:00.492 --rc genhtml_legend=1 00:15:00.492 --rc geninfo_all_blocks=1 00:15:00.492 --rc geninfo_unexecuted_blocks=1 00:15:00.492 00:15:00.492 ' 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:15:00.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.492 --rc genhtml_branch_coverage=1 00:15:00.492 --rc genhtml_function_coverage=1 00:15:00.492 --rc genhtml_legend=1 00:15:00.492 --rc geninfo_all_blocks=1 00:15:00.492 --rc geninfo_unexecuted_blocks=1 00:15:00.492 00:15:00.492 ' 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.492 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:00.493 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
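The nvmf_veth_init steps that follow build the test network: two initiator veth pairs stay on the host, two target veth pairs are moved into the nvmf_tgt_ns_spdk namespace, and an nvmf_br bridge joins their peer ends on 10.0.0.0/24. A minimal standalone sketch of that topology, assuming the interface names and addresses recorded in this log, looks like:

  # Sketch of the namespace + bridge topology used by the TCP tests.
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # Target-side interfaces live inside the namespace where nvmf_tgt will run.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addressing: initiators on .1/.2, target interfaces on .3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the four *_br peers so host and namespace share one L2 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # Firewall rules are tagged so teardown can strip them with 'grep -v SPDK_NVMF'.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  ping -c 1 10.0.0.3    # host -> target-namespace reachability check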
00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:00.493 Cannot find device "nvmf_init_br" 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:00.493 Cannot find device "nvmf_init_br2" 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:00.493 Cannot find device "nvmf_tgt_br" 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:00.493 Cannot find device "nvmf_tgt_br2" 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:15:00.493 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:00.493 Cannot find device "nvmf_init_br" 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:00.494 Cannot find device "nvmf_init_br2" 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:00.494 Cannot find device "nvmf_tgt_br" 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:00.494 Cannot find device "nvmf_tgt_br2" 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:00.494 Cannot find device "nvmf_br" 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:00.494 Cannot find device "nvmf_init_if" 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:00.494 Cannot find device "nvmf_init_if2" 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:00.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.494 08:27:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:15:00.494 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:00.494 Cannot open 
network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.494 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:15:00.494 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:00.494 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:00.494 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:00.494 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:00.494 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:00.494 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:00.753 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:00.753 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i 
nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:00.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:00.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:15:00.754 00:15:00.754 --- 10.0.0.3 ping statistics --- 00:15:00.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.754 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:00.754 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:00.754 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:15:00.754 00:15:00.754 --- 10.0.0.4 ping statistics --- 00:15:00.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.754 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:00.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:15:00.754 00:15:00.754 --- 10.0.0.1 ping statistics --- 00:15:00.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.754 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:00.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:00.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:00.754 00:15:00.754 --- 10.0.0.2 ping statistics --- 00:15:00.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.754 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74446 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74446 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # '[' -z 74446 ']' 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:00.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@843 -- # local max_retries=100 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@847 -- # xtrace_disable 00:15:00.754 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:01.013 [2024-11-20 08:27:48.332075] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:15:01.013 [2024-11-20 08:27:48.332242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.013 [2024-11-20 08:27:48.482756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:01.013 [2024-11-20 08:27:48.569813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
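With connectivity verified and nvme-tcp loaded, perf.sh starts the target inside the namespace and configures it over JSON-RPC before driving I/O with spdk_nvme_perf. A condensed sketch of that sequence, limited to commands that appear in this log (Nvme0n1 is the local 0000:00:10.0 drive picked up from the generated bdev config), is:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Launch the target inside the test namespace with the same flags as nvmfappstart;
  # the script waits for /var/tmp/spdk.sock before issuing any RPCs.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # Target-side configuration: transport, bdevs, subsystem, namespaces, listener.
  $RPC nvmf_create_transport -t tcp -o
  $RPC bdev_malloc_create 64 512                     # creates the Malloc0 bdev
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # Initiator side: one of the fabric perf runs recorded below.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'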
00:15:01.013 [2024-11-20 08:27:48.569892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.013 [2024-11-20 08:27:48.569906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.013 [2024-11-20 08:27:48.569917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.013 [2024-11-20 08:27:48.569926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:01.013 [2024-11-20 08:27:48.571505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.014 [2024-11-20 08:27:48.571665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.014 [2024-11-20 08:27:48.571763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.014 [2024-11-20 08:27:48.571768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.272 [2024-11-20 08:27:48.654459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:01.272 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:15:01.272 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@871 -- # return 0 00:15:01.272 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:01.272 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@735 -- # xtrace_disable 00:15:01.272 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:01.273 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.273 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:01.273 08:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:01.842 08:27:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:01.842 08:27:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:02.107 08:27:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:02.107 08:27:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:02.674 08:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:02.674 08:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:02.674 08:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:02.674 08:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:02.674 08:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:02.932 [2024-11-20 08:27:50.328457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.932 08:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:03.190 08:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:03.190 08:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:03.448 08:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:03.448 08:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:03.707 08:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:03.966 [2024-11-20 08:27:51.468343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:03.966 08:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:04.224 08:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:04.224 08:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:04.224 08:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:04.224 08:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:05.599 Initializing NVMe Controllers 00:15:05.599 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:05.599 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:05.599 Initialization complete. Launching workers. 00:15:05.599 ======================================================== 00:15:05.599 Latency(us) 00:15:05.599 Device Information : IOPS MiB/s Average min max 00:15:05.599 PCIE (0000:00:10.0) NSID 1 from core 0: 21088.00 82.38 1517.14 410.38 7801.82 00:15:05.599 ======================================================== 00:15:05.599 Total : 21088.00 82.38 1517.14 410.38 7801.82 00:15:05.599 00:15:05.599 08:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:06.974 Initializing NVMe Controllers 00:15:06.974 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:06.974 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:06.974 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:06.974 Initialization complete. Launching workers. 
00:15:06.974 ======================================================== 00:15:06.974 Latency(us) 00:15:06.974 Device Information : IOPS MiB/s Average min max 00:15:06.974 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2658.81 10.39 375.74 125.89 7195.45 00:15:06.974 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.76 0.48 8145.59 5981.46 12039.80 00:15:06.974 ======================================================== 00:15:06.974 Total : 2781.57 10.87 718.65 125.89 12039.80 00:15:06.974 00:15:06.975 08:27:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:08.386 Initializing NVMe Controllers 00:15:08.386 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.386 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:08.386 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:08.386 Initialization complete. Launching workers. 00:15:08.386 ======================================================== 00:15:08.386 Latency(us) 00:15:08.386 Device Information : IOPS MiB/s Average min max 00:15:08.386 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8033.98 31.38 3985.13 748.61 11362.67 00:15:08.386 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3682.99 14.39 8737.15 5028.50 16749.04 00:15:08.386 ======================================================== 00:15:08.386 Total : 11716.98 45.77 5478.83 748.61 16749.04 00:15:08.386 00:15:08.386 08:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:08.386 08:27:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:10.921 Initializing NVMe Controllers 00:15:10.922 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:10.922 Controller IO queue size 128, less than required. 00:15:10.922 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:10.922 Controller IO queue size 128, less than required. 00:15:10.922 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:10.922 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:10.922 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:10.922 Initialization complete. Launching workers. 
00:15:10.922 ======================================================== 00:15:10.922 Latency(us) 00:15:10.922 Device Information : IOPS MiB/s Average min max 00:15:10.922 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1428.93 357.23 91394.16 46652.33 143080.00 00:15:10.922 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 614.47 153.62 211213.70 77094.71 344052.04 00:15:10.922 ======================================================== 00:15:10.922 Total : 2043.40 510.85 127425.04 46652.33 344052.04 00:15:10.922 00:15:10.922 08:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:11.181 Initializing NVMe Controllers 00:15:11.181 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:11.181 Controller IO queue size 128, less than required. 00:15:11.181 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:11.181 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:11.181 Controller IO queue size 128, less than required. 00:15:11.181 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:11.181 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:11.181 WARNING: Some requested NVMe devices were skipped 00:15:11.181 No valid NVMe controllers or AIO or URING devices found 00:15:11.181 08:27:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:13.714 Initializing NVMe Controllers 00:15:13.714 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.714 Controller IO queue size 128, less than required. 00:15:13.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:13.714 Controller IO queue size 128, less than required. 00:15:13.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:13.714 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:13.714 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:13.714 Initialization complete. Launching workers. 
00:15:13.714 00:15:13.714 ==================== 00:15:13.714 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:13.714 TCP transport: 00:15:13.714 polls: 9540 00:15:13.714 idle_polls: 5976 00:15:13.714 sock_completions: 3564 00:15:13.714 nvme_completions: 5297 00:15:13.714 submitted_requests: 7946 00:15:13.714 queued_requests: 1 00:15:13.714 00:15:13.714 ==================== 00:15:13.714 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:13.714 TCP transport: 00:15:13.714 polls: 9739 00:15:13.714 idle_polls: 6359 00:15:13.714 sock_completions: 3380 00:15:13.714 nvme_completions: 5661 00:15:13.714 submitted_requests: 8440 00:15:13.714 queued_requests: 1 00:15:13.714 ======================================================== 00:15:13.714 Latency(us) 00:15:13.714 Device Information : IOPS MiB/s Average min max 00:15:13.714 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1321.15 330.29 99332.53 46053.44 167934.35 00:15:13.714 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1411.95 352.99 91632.50 49717.54 150068.33 00:15:13.714 ======================================================== 00:15:13.714 Total : 2733.10 683.27 95354.60 46053.44 167934.35 00:15:13.714 00:15:13.714 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:13.714 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.972 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:13.972 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:13.972 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:13.972 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:13.972 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:13.972 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:13.973 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:13.973 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:13.973 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:13.973 rmmod nvme_tcp 00:15:14.231 rmmod nvme_fabrics 00:15:14.231 rmmod nvme_keyring 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74446 ']' 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74446 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' -z 74446 ']' 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@961 -- # kill -0 74446 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # uname 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 74446 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@963 -- # process_name=reactor_0 00:15:14.231 killing process with pid 74446 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@975 -- # echo 'killing process with pid 74446' 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # kill 74446 00:15:14.231 08:28:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@981 -- # wait 74446 00:15:15.663 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:15.663 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:15.663 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:15.663 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:15.664 08:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:15.664 00:15:15.664 real 0m15.578s 00:15:15.664 user 0m56.477s 00:15:15.664 sys 0m4.248s 00:15:15.664 08:28:03 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1133 -- # xtrace_disable 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:15.664 ************************************ 00:15:15.664 END TEST nvmf_perf 00:15:15.664 ************************************ 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1114 -- # xtrace_disable 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.664 ************************************ 00:15:15.664 START TEST nvmf_fio_host 00:15:15.664 ************************************ 00:15:15.664 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:15.922 * Looking for test storage... 00:15:15.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1638 -- # lcov --version 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:15:15.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.922 --rc genhtml_branch_coverage=1 00:15:15.922 --rc genhtml_function_coverage=1 00:15:15.922 --rc genhtml_legend=1 00:15:15.922 --rc geninfo_all_blocks=1 00:15:15.922 --rc geninfo_unexecuted_blocks=1 00:15:15.922 00:15:15.922 ' 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:15:15.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.922 --rc genhtml_branch_coverage=1 00:15:15.922 --rc genhtml_function_coverage=1 00:15:15.922 --rc genhtml_legend=1 00:15:15.922 --rc geninfo_all_blocks=1 00:15:15.922 --rc geninfo_unexecuted_blocks=1 00:15:15.922 00:15:15.922 ' 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:15:15.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.922 --rc genhtml_branch_coverage=1 00:15:15.922 --rc genhtml_function_coverage=1 00:15:15.922 --rc genhtml_legend=1 00:15:15.922 --rc geninfo_all_blocks=1 00:15:15.922 --rc geninfo_unexecuted_blocks=1 00:15:15.922 00:15:15.922 ' 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:15:15.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.922 --rc genhtml_branch_coverage=1 00:15:15.922 --rc genhtml_function_coverage=1 00:15:15.922 --rc genhtml_legend=1 00:15:15.922 --rc geninfo_all_blocks=1 00:15:15.922 --rc geninfo_unexecuted_blocks=1 00:15:15.922 00:15:15.922 ' 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.922 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:15.923 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:15.923 
08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:15.923 Cannot find device "nvmf_init_br" 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:15.923 Cannot find device "nvmf_init_br2" 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:15.923 Cannot find device "nvmf_tgt_br" 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:15.923 Cannot find device "nvmf_tgt_br2" 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:15.923 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:16.181 Cannot find device "nvmf_init_br" 00:15:16.181 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:16.181 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:16.181 Cannot find device "nvmf_init_br2" 00:15:16.181 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 
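The interface names and addresses defined above are what nvmf_veth_init wires together in the entries that follow; condensed, the topology it builds is roughly the sketch below (same names and addresses as in the trace, second veth pair omitted for brevity; this is an illustration, not the harness code itself):

  # Host side: initiator veth ends plus a bridge; the target ends live in a netns.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # iptables ACCEPT rules for port 4420 and bridge forwarding are added afterwards,
  # tagged with an SPDK_NVMF comment so teardown can strip them again (see iptr above).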
00:15:16.181 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:16.181 Cannot find device "nvmf_tgt_br" 00:15:16.181 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:16.181 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:16.181 Cannot find device "nvmf_tgt_br2" 00:15:16.181 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:16.181 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:16.181 Cannot find device "nvmf_br" 00:15:16.181 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:16.181 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:16.181 Cannot find device "nvmf_init_if" 00:15:16.181 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:16.182 Cannot find device "nvmf_init_if2" 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set 
nvmf_init_if2 up 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:16.182 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:16.440 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:16.440 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:16.440 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:16.440 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:16.440 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:16.440 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:16.440 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:16.440 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:16.440 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:16.440 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:15:16.440 00:15:16.440 --- 10.0.0.3 ping statistics --- 00:15:16.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.440 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:15:16.440 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:16.441 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:16.441 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.120 ms 00:15:16.441 00:15:16.441 --- 10.0.0.4 ping statistics --- 00:15:16.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.441 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:16.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:16.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:15:16.441 00:15:16.441 --- 10.0.0.1 ping statistics --- 00:15:16.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.441 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:16.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:16.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:15:16.441 00:15:16.441 --- 10.0.0.2 ping statistics --- 00:15:16.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.441 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:16.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
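With connectivity verified by the pings above, the entries that follow launch the SPDK target inside the namespace and wait for its RPC socket. Reduced to a sketch (the polling loop is illustrative; the harness uses its own waitforlisten helper, and the nvmf_tgt flags are the ones shown in the trace):

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # Poll the UNIX-domain RPC socket until the target answers.
  until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done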
00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74919 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74919 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # '[' -z 74919 ']' 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@843 -- # local max_retries=100 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@847 -- # xtrace_disable 00:15:16.441 08:28:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:16.441 [2024-11-20 08:28:03.895909] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:15:16.441 [2024-11-20 08:28:03.896293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.699 [2024-11-20 08:28:04.052220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:16.699 [2024-11-20 08:28:04.123818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.699 [2024-11-20 08:28:04.124140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.699 [2024-11-20 08:28:04.124315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.699 [2024-11-20 08:28:04.124471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.699 [2024-11-20 08:28:04.124516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
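Once the reactors below are up, the test provisions the target over RPC before handing it to fio; the sequence from the following entries, condensed (paths shortened, arguments exactly as they appear in the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

fio is then pointed at that listener through the SPDK plugin, i.e. LD_PRELOAD of build/fio/spdk_nvme with --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1', as the spdk ioengine lines in the job output further on confirm.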
00:15:16.699 [2024-11-20 08:28:04.126091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.699 [2024-11-20 08:28:04.126489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.699 [2024-11-20 08:28:04.126614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.699 [2024-11-20 08:28:04.126619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.699 [2024-11-20 08:28:04.204105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:16.957 08:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:15:16.957 08:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@871 -- # return 0 00:15:16.957 08:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:17.215 [2024-11-20 08:28:04.580113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.215 08:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:17.215 08:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@735 -- # xtrace_disable 00:15:17.215 08:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.215 08:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:17.473 Malloc1 00:15:17.473 08:28:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:17.732 08:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:17.991 08:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:18.250 [2024-11-20 08:28:05.750065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:18.250 08:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local sanitizers 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1332 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # shift 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local asan_lib= 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # grep libasan 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # asan_lib= 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # grep libclang_rt.asan 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:18.509 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:15:18.767 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # asan_lib= 00:15:18.767 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:15:18.767 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:18.767 08:28:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:18.767 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:18.767 fio-3.35 00:15:18.767 Starting 1 thread 00:15:21.300 00:15:21.300 test: (groupid=0, jobs=1): err= 0: pid=74990: Wed Nov 20 08:28:08 2024 00:15:21.300 read: IOPS=7654, BW=29.9MiB/s (31.4MB/s)(60.0MiB/2008msec) 00:15:21.300 slat (nsec): min=1791, max=295741, avg=2433.42, stdev=3683.66 00:15:21.300 clat (usec): min=2452, max=14924, avg=8739.64, stdev=742.89 00:15:21.300 lat (usec): min=2505, max=14926, avg=8742.07, stdev=742.59 00:15:21.300 clat percentiles (usec): 00:15:21.300 | 1.00th=[ 7308], 5.00th=[ 7767], 10.00th=[ 7963], 20.00th=[ 8160], 00:15:21.300 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:15:21.300 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[ 9896], 00:15:21.300 | 99.00th=[10814], 99.50th=[11338], 99.90th=[13435], 99.95th=[14484], 00:15:21.300 | 99.99th=[14877] 00:15:21.300 bw ( KiB/s): min=29504, max=31272, per=99.94%, avg=30600.00, stdev=773.84, samples=4 00:15:21.300 iops : min= 7376, max= 7818, avg=7650.00, stdev=193.46, samples=4 00:15:21.300 write: IOPS=7632, BW=29.8MiB/s (31.3MB/s)(59.9MiB/2008msec); 0 zone resets 00:15:21.300 slat (nsec): min=1850, max=255700, avg=2481.92, stdev=2710.14 00:15:21.300 clat (usec): min=2323, max=14732, avg=7940.68, stdev=694.58 00:15:21.300 lat (usec): min=2336, max=14734, avg=7943.16, stdev=694.44 00:15:21.300 
clat percentiles (usec): 00:15:21.300 | 1.00th=[ 6587], 5.00th=[ 7046], 10.00th=[ 7177], 20.00th=[ 7439], 00:15:21.300 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:15:21.300 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:15:21.300 | 99.00th=[ 9765], 99.50th=[10552], 99.90th=[13173], 99.95th=[13698], 00:15:21.300 | 99.99th=[14615] 00:15:21.300 bw ( KiB/s): min=30016, max=31016, per=100.00%, avg=30540.00, stdev=417.25, samples=4 00:15:21.300 iops : min= 7504, max= 7754, avg=7635.00, stdev=104.31, samples=4 00:15:21.300 lat (msec) : 4=0.11%, 10=97.43%, 20=2.46% 00:15:21.300 cpu : usr=71.65%, sys=22.17%, ctx=8, majf=0, minf=7 00:15:21.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:21.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:21.300 issued rwts: total=15370,15327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.300 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:21.300 00:15:21.300 Run status group 0 (all jobs): 00:15:21.300 READ: bw=29.9MiB/s (31.4MB/s), 29.9MiB/s-29.9MiB/s (31.4MB/s-31.4MB/s), io=60.0MiB (63.0MB), run=2008-2008msec 00:15:21.300 WRITE: bw=29.8MiB/s (31.3MB/s), 29.8MiB/s-29.8MiB/s (31.3MB/s-31.3MB/s), io=59.9MiB (62.8MB), run=2008-2008msec 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local sanitizers 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # shift 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local asan_lib= 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # grep libasan 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # asan_lib= 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # grep libclang_rt.asan 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # asan_lib= 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:21.300 08:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:21.300 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:21.300 fio-3.35 00:15:21.300 Starting 1 thread 00:15:23.847 00:15:23.847 test: (groupid=0, jobs=1): err= 0: pid=75039: Wed Nov 20 08:28:11 2024 00:15:23.847 read: IOPS=8017, BW=125MiB/s (131MB/s)(251MiB/2007msec) 00:15:23.847 slat (usec): min=2, max=124, avg= 3.56, stdev= 2.34 00:15:23.847 clat (usec): min=2225, max=17729, avg=9049.10, stdev=2691.89 00:15:23.847 lat (usec): min=2229, max=17732, avg=9052.66, stdev=2691.90 00:15:23.847 clat percentiles (usec): 00:15:23.847 | 1.00th=[ 4113], 5.00th=[ 5014], 10.00th=[ 5604], 20.00th=[ 6652], 00:15:23.847 | 30.00th=[ 7439], 40.00th=[ 8094], 50.00th=[ 8848], 60.00th=[ 9634], 00:15:23.847 | 70.00th=[10421], 80.00th=[11338], 90.00th=[12649], 95.00th=[13829], 00:15:23.847 | 99.00th=[16057], 99.50th=[16581], 99.90th=[17171], 99.95th=[17433], 00:15:23.847 | 99.99th=[17695] 00:15:23.847 bw ( KiB/s): min=56672, max=71232, per=50.84%, avg=65224.00, stdev=6461.43, samples=4 00:15:23.847 iops : min= 3542, max= 4452, avg=4076.50, stdev=403.84, samples=4 00:15:23.847 write: IOPS=4654, BW=72.7MiB/s (76.3MB/s)(134MiB/1837msec); 0 zone resets 00:15:23.847 slat (usec): min=31, max=326, avg=37.39, stdev= 9.49 00:15:23.847 clat (usec): min=4968, max=20879, avg=12300.97, stdev=2438.57 00:15:23.847 lat (usec): min=5001, max=20911, avg=12338.36, stdev=2439.29 00:15:23.847 clat percentiles (usec): 00:15:23.847 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10159], 00:15:23.847 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11994], 60.00th=[12518], 00:15:23.847 | 70.00th=[13435], 80.00th=[14484], 90.00th=[15795], 95.00th=[16712], 00:15:23.847 | 99.00th=[18220], 99.50th=[18482], 99.90th=[20055], 99.95th=[20055], 00:15:23.847 | 99.99th=[20841] 00:15:23.847 bw ( KiB/s): min=60512, max=74976, per=91.51%, avg=68152.00, stdev=6533.61, samples=4 00:15:23.847 iops : min= 3782, max= 4686, avg=4259.50, stdev=408.35, samples=4 00:15:23.847 lat (msec) : 4=0.47%, 10=48.37%, 20=51.13%, 50=0.04% 00:15:23.847 cpu : usr=81.06%, sys=15.00%, ctx=5, majf=0, minf=12 00:15:23.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:15:23.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.847 issued rwts: total=16092,8551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.847 00:15:23.847 Run status group 0 (all jobs): 00:15:23.847 
READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=251MiB (264MB), run=2007-2007msec 00:15:23.847 WRITE: bw=72.7MiB/s (76.3MB/s), 72.7MiB/s-72.7MiB/s (76.3MB/s-76.3MB/s), io=134MiB (140MB), run=1837-1837msec 00:15:23.847 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:23.847 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:23.847 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:23.847 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:23.847 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:23.847 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:23.847 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:23.847 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:23.847 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:23.847 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:23.847 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:24.107 rmmod nvme_tcp 00:15:24.107 rmmod nvme_fabrics 00:15:24.107 rmmod nvme_keyring 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74919 ']' 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74919 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' -z 74919 ']' 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@961 -- # kill -0 74919 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # uname 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 74919 00:15:24.107 killing process with pid 74919 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@975 -- # echo 'killing process with pid 74919' 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # kill 74919 00:15:24.107 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@981 -- # wait 74919 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:24.365 08:28:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:24.365 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:24.624 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:24.624 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:24.624 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:24.624 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:24.624 08:28:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.624 08:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.624 08:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.624 08:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:24.624 ************************************ 00:15:24.624 END TEST nvmf_fio_host 00:15:24.624 ************************************ 00:15:24.624 00:15:24.624 real 0m8.854s 00:15:24.624 user 0m34.876s 00:15:24.624 sys 0m2.548s 00:15:24.624 08:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1133 -- # xtrace_disable 00:15:24.624 08:28:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.624 08:28:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:24.624 08:28:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:15:24.624 08:28:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1114 -- # xtrace_disable 00:15:24.624 08:28:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.624 ************************************ 00:15:24.624 START TEST nvmf_failover 
00:15:24.624 ************************************ 00:15:24.624 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:24.885 * Looking for test storage... 00:15:24.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1638 -- # lcov --version 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:15:24.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.885 --rc genhtml_branch_coverage=1 00:15:24.885 --rc genhtml_function_coverage=1 00:15:24.885 --rc genhtml_legend=1 00:15:24.885 --rc geninfo_all_blocks=1 00:15:24.885 --rc geninfo_unexecuted_blocks=1 00:15:24.885 00:15:24.885 ' 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:15:24.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.885 --rc genhtml_branch_coverage=1 00:15:24.885 --rc genhtml_function_coverage=1 00:15:24.885 --rc genhtml_legend=1 00:15:24.885 --rc geninfo_all_blocks=1 00:15:24.885 --rc geninfo_unexecuted_blocks=1 00:15:24.885 00:15:24.885 ' 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:15:24.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.885 --rc genhtml_branch_coverage=1 00:15:24.885 --rc genhtml_function_coverage=1 00:15:24.885 --rc genhtml_legend=1 00:15:24.885 --rc geninfo_all_blocks=1 00:15:24.885 --rc geninfo_unexecuted_blocks=1 00:15:24.885 00:15:24.885 ' 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:15:24.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.885 --rc genhtml_branch_coverage=1 00:15:24.885 --rc genhtml_function_coverage=1 00:15:24.885 --rc genhtml_legend=1 00:15:24.885 --rc geninfo_all_blocks=1 00:15:24.885 --rc geninfo_unexecuted_blocks=1 00:15:24.885 00:15:24.885 ' 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.885 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:15:24.886 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:24.886 Cannot find device "nvmf_init_br" 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:24.886 Cannot find device "nvmf_init_br2" 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:24.886 Cannot find device "nvmf_tgt_br" 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:24.886 Cannot find device "nvmf_tgt_br2" 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:24.886 Cannot find device "nvmf_init_br" 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:24.886 Cannot find device "nvmf_init_br2" 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:24.886 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:24.886 Cannot find device "nvmf_tgt_br" 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:25.146 Cannot find device "nvmf_tgt_br2" 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:25.146 Cannot find device "nvmf_br" 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:25.146 Cannot find device "nvmf_init_if" 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:25.146 Cannot find device "nvmf_init_if2" 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- 
# ip link set nvmf_init_br2 master nvmf_br 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.146 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:25.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:25.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:15:25.405 00:15:25.405 --- 10.0.0.3 ping statistics --- 00:15:25.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.405 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:25.405 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:25.405 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:25.405 00:15:25.405 --- 10.0.0.4 ping statistics --- 00:15:25.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.405 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:25.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:25.405 00:15:25.405 --- 10.0.0.1 ping statistics --- 00:15:25.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.405 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:25.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:25.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:25.405 00:15:25.405 --- 10.0.0.2 ping statistics --- 00:15:25.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.405 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:25.405 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75309 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75309 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # '[' -z 75309 ']' 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@843 -- # local max_retries=100 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@847 -- # xtrace_disable 00:15:25.406 08:28:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:25.406 [2024-11-20 08:28:12.831152] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:15:25.406 [2024-11-20 08:28:12.831249] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.665 [2024-11-20 08:28:12.979496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:25.665 [2024-11-20 08:28:13.062895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:25.665 [2024-11-20 08:28:13.063162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.665 [2024-11-20 08:28:13.063276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.665 [2024-11-20 08:28:13.063291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.665 [2024-11-20 08:28:13.063299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.665 [2024-11-20 08:28:13.064729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.665 [2024-11-20 08:28:13.064839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.665 [2024-11-20 08:28:13.064839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.665 [2024-11-20 08:28:13.144756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:26.601 08:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:15:26.602 08:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@871 -- # return 0 00:15:26.602 08:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:26.602 08:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@735 -- # xtrace_disable 00:15:26.602 08:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:26.602 08:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.602 08:28:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:26.602 [2024-11-20 08:28:14.122592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.602 08:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:27.169 Malloc0 00:15:27.169 08:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.428 08:28:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:27.686 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:27.945 [2024-11-20 08:28:15.326081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:27.945 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:28.205 [2024-11-20 08:28:15.586391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:28.205 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:28.464 [2024-11-20 08:28:15.842726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 
port 4422 *** 00:15:28.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:28.464 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75372 00:15:28.464 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:28.464 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:28.464 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75372 /var/tmp/bdevperf.sock 00:15:28.464 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # '[' -z 75372 ']' 00:15:28.464 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:28.464 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@843 -- # local max_retries=100 00:15:28.464 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:28.464 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@847 -- # xtrace_disable 00:15:28.464 08:28:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:29.400 08:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:15:29.400 08:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@871 -- # return 0 00:15:29.400 08:28:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:29.968 NVMe0n1 00:15:29.968 08:28:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:30.228 00:15:30.228 08:28:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:30.228 08:28:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75396 00:15:30.228 08:28:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:31.163 08:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:31.422 08:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:34.708 08:28:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:34.708 00:15:34.708 08:28:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:35.273 08:28:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@50 -- # sleep 3 00:15:38.604 08:28:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:38.604 [2024-11-20 08:28:25.871786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:38.604 08:28:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:39.537 08:28:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:39.796 08:28:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75396 00:15:46.369 { 00:15:46.369 "results": [ 00:15:46.369 { 00:15:46.369 "job": "NVMe0n1", 00:15:46.369 "core_mask": "0x1", 00:15:46.369 "workload": "verify", 00:15:46.369 "status": "finished", 00:15:46.369 "verify_range": { 00:15:46.369 "start": 0, 00:15:46.369 "length": 16384 00:15:46.369 }, 00:15:46.369 "queue_depth": 128, 00:15:46.369 "io_size": 4096, 00:15:46.369 "runtime": 15.009191, 00:15:46.369 "iops": 8544.897589750173, 00:15:46.369 "mibps": 33.378506209961614, 00:15:46.369 "io_failed": 3557, 00:15:46.369 "io_timeout": 0, 00:15:46.369 "avg_latency_us": 14543.415167428904, 00:15:46.369 "min_latency_us": 577.1636363636363, 00:15:46.369 "max_latency_us": 16324.421818181818 00:15:46.369 } 00:15:46.369 ], 00:15:46.369 "core_count": 1 00:15:46.369 } 00:15:46.369 08:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75372 00:15:46.369 08:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' -z 75372 ']' 00:15:46.369 08:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@961 -- # kill -0 75372 00:15:46.369 08:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # uname 00:15:46.369 08:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:15:46.369 08:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 75372 00:15:46.369 killing process with pid 75372 00:15:46.369 08:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:15:46.369 08:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:15:46.369 08:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@975 -- # echo 'killing process with pid 75372' 00:15:46.369 08:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # kill 75372 00:15:46.369 08:28:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@981 -- # wait 75372 00:15:46.369 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:46.369 [2024-11-20 08:28:15.918741] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:15:46.369 [2024-11-20 08:28:15.918848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75372 ] 00:15:46.369 [2024-11-20 08:28:16.072842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.369 [2024-11-20 08:28:16.160978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.369 [2024-11-20 08:28:16.248913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:46.369 Running I/O for 15 seconds... 00:15:46.369 8576.00 IOPS, 33.50 MiB/s [2024-11-20T08:28:33.930Z] [2024-11-20 08:28:18.861209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:46.369 [2024-11-20 08:28:18.861582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-11-20 08:28:18.861790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-11-20 08:28:18.861837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-11-20 08:28:18.861866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-11-20 08:28:18.861895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-11-20 08:28:18.861934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861949] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-11-20 08:28:18.861962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.861977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-11-20 08:28:18.861993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.862008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-11-20 08:28:18.862022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.862038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-11-20 08:28:18.862052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.862066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-11-20 08:28:18.862081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.862105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-11-20 08:28:18.862119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.369 [2024-11-20 08:28:18.862134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862249] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75264 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-11-20 08:28:18.862566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-11-20 08:28:18.862595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-11-20 08:28:18.862623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-11-20 08:28:18.862661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-11-20 08:28:18.862698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-11-20 08:28:18.862727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-11-20 08:28:18.862757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-11-20 08:28:18.862785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:46.370 [2024-11-20 08:28:18.862910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.862978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.862993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.863006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.863022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.863036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.863051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.863065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.863081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.863096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.863111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.863125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.863140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.863154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.863170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.863184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.863199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-11-20 08:28:18.863228] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.370 [2024-11-20 08:28:18.863243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.863257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.863297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.863341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.863370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.863402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.863432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.863467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.863496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.863525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.863600] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.863633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.863665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.863706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.863739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.863779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.863814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.863861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.863923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.863967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.863983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.863997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.864026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.864055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.864090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.864119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.864149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.864177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.864207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.864261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.864291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.864327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 
[2024-11-20 08:28:18.864342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.371 [2024-11-20 08:28:18.864356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.864387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.864417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.864447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.864478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.864508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.864538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.864567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.864603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.371 [2024-11-20 08:28:18.864633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.371 [2024-11-20 08:28:18.864669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.864691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.864706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.864720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.864736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.864750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.864764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.864778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.864804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.864817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.864833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.864848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.864863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.864887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.864904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.864919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.864934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.864948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.864962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.864976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.864991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:86 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.865005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.865035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.865072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.865101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.865143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.865172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.865201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.865229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.865258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.865286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76168 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:46.372 [2024-11-20 08:28:18.865315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.865345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.865374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.865402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.865431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.865468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.865496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.372 [2024-11-20 08:28:18.865525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.372 [2024-11-20 08:28:18.865591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.372 [2024-11-20 08:28:18.865602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75576 len:8 PRP1 0x0 PRP2 0x0 00:15:46.372 [2024-11-20 08:28:18.865615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.372 [2024-11-20 08:28:18.865720] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:46.372 [2024-11-20 08:28:18.865795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:46.372 [2024-11-20 08:28:18.865851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:46.372 [2024-11-20 08:28:18.865868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:46.372 [2024-11-20 08:28:18.865881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:46.372 [2024-11-20 08:28:18.865895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:46.372 [2024-11-20 08:28:18.865908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:46.372 [2024-11-20 08:28:18.865922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:46.372 [2024-11-20 08:28:18.865935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:46.373 [2024-11-20 08:28:18.865948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:15:46.373 [2024-11-20 08:28:18.869491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:15:46.373 [2024-11-20 08:28:18.869531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9710 (9): Bad file descriptor
00:15:46.373 [2024-11-20 08:28:18.900336] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:15:46.373 8459.00 IOPS, 33.04 MiB/s [2024-11-20T08:28:33.934Z] 8525.67 IOPS, 33.30 MiB/s [2024-11-20T08:28:33.934Z] 8563.00 IOPS, 33.45 MiB/s [2024-11-20T08:28:33.934Z] [2024-11-20 08:28:22.566998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.567651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.373 [2024-11-20 08:28:22.567684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.373 [2024-11-20 08:28:22.567719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.373 [2024-11-20 08:28:22.567751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.373 [2024-11-20 08:28:22.567783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.373 [2024-11-20 08:28:22.567814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.373 [2024-11-20 08:28:22.567865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.373 [2024-11-20 08:28:22.567929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.373 [2024-11-20 08:28:22.567964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.567997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.568012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.568028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.568043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.568058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.568073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.568104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.568120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.568136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.568150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.568167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.568182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.568198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.373 [2024-11-20 08:28:22.568225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.373 [2024-11-20 08:28:22.568241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 
08:28:22.568534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.374 [2024-11-20 08:28:22.568609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.374 [2024-11-20 08:28:22.568639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.374 [2024-11-20 08:28:22.568669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.374 [2024-11-20 08:28:22.568699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.374 [2024-11-20 08:28:22.568728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.374 [2024-11-20 08:28:22.568759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.374 [2024-11-20 08:28:22.568789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.374 [2024-11-20 08:28:22.568820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.568978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.568992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.569023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.569054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.569084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.569114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.569144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:30 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.569174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.569204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.569234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.569266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.569303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.569335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.374 [2024-11-20 08:28:22.569365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.374 [2024-11-20 08:28:22.569381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.569395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.569425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81752 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 
[2024-11-20 08:28:22.569809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.569983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.569999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.570029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.570059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.570089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.570128] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.570160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.570191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.570221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.570280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.570311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.570342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.570374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.570406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.570437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.570468] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.570499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.570536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.375 [2024-11-20 08:28:22.570568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.570599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.570630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.375 [2024-11-20 08:28:22.570676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.375 [2024-11-20 08:28:22.570692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.376 [2024-11-20 08:28:22.570706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.570722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.376 [2024-11-20 08:28:22.570736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.570752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.376 [2024-11-20 08:28:22.570766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.570782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.376 [2024-11-20 08:28:22.570796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.570812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.376 [2024-11-20 08:28:22.570826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.570868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.376 [2024-11-20 08:28:22.570886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.570903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.376 [2024-11-20 08:28:22.570925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.570943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.376 [2024-11-20 08:28:22.570958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.570982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.376 [2024-11-20 08:28:22.570997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.376 [2024-11-20 08:28:22.571029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.376 [2024-11-20 08:28:22.571060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.376 [2024-11-20 08:28:22.571091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1558370 is same with the state(6) to be set 00:15:46.376 [2024-11-20 08:28:22.571125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.376 [2024-11-20 08:28:22.571137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.376 [2024-11-20 08:28:22.571148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82056 len:8 PRP1 0x0 PRP2 0x0 00:15:46.376 [2024-11-20 08:28:22.571162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.376 [2024-11-20 08:28:22.571204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.376 [2024-11-20 08:28:22.571215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82576 len:8 PRP1 0x0 PRP2 0x0 00:15:46.376 [2024-11-20 08:28:22.571229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.376 [2024-11-20 08:28:22.571253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.376 [2024-11-20 08:28:22.571264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82584 len:8 PRP1 0x0 PRP2 0x0 00:15:46.376 [2024-11-20 08:28:22.571277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.376 [2024-11-20 08:28:22.571302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.376 [2024-11-20 08:28:22.571312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82592 len:8 PRP1 0x0 PRP2 0x0 00:15:46.376 [2024-11-20 08:28:22.571326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.376 [2024-11-20 08:28:22.571351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.376 [2024-11-20 08:28:22.571361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82600 len:8 PRP1 0x0 PRP2 0x0 00:15:46.376 [2024-11-20 08:28:22.571375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.376 [2024-11-20 08:28:22.571413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.376 [2024-11-20 08:28:22.571423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82608 len:8 PRP1 0x0 PRP2 0x0 00:15:46.376 [2024-11-20 08:28:22.571437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.376 [2024-11-20 08:28:22.571463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.376 [2024-11-20 08:28:22.571473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82616 len:8 PRP1 0x0 PRP2 0x0 00:15:46.376 [2024-11-20 08:28:22.571487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:46.376 [2024-11-20 08:28:22.571501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.376 [2024-11-20 08:28:22.571512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.376 [2024-11-20 08:28:22.571523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82624 len:8 PRP1 0x0 PRP2 0x0 00:15:46.376 [2024-11-20 08:28:22.571544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.376 [2024-11-20 08:28:22.571599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.376 [2024-11-20 08:28:22.571611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82632 len:8 PRP1 0x0 PRP2 0x0 00:15:46.376 [2024-11-20 08:28:22.571628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571700] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:46.376 [2024-11-20 08:28:22.571762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.376 [2024-11-20 08:28:22.571785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.376 [2024-11-20 08:28:22.571816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.376 [2024-11-20 08:28:22.571874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.376 [2024-11-20 08:28:22.571919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:22.571935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:15:46.376 [2024-11-20 08:28:22.571982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9710 (9): Bad file descriptor 00:15:46.376 [2024-11-20 08:28:22.575633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:15:46.376 [2024-11-20 08:28:22.604600] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:15:46.376 8632.40 IOPS, 33.72 MiB/s [2024-11-20T08:28:33.937Z] 8591.00 IOPS, 33.56 MiB/s [2024-11-20T08:28:33.937Z] 8655.14 IOPS, 33.81 MiB/s [2024-11-20T08:28:33.937Z] 8688.25 IOPS, 33.94 MiB/s [2024-11-20T08:28:33.937Z] 8680.22 IOPS, 33.91 MiB/s [2024-11-20T08:28:33.937Z] [2024-11-20 08:28:27.235304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.376 [2024-11-20 08:28:27.235374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:27.235404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.376 [2024-11-20 08:28:27.235421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:27.235439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.376 [2024-11-20 08:28:27.235456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:27.235473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.376 [2024-11-20 08:28:27.235488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.376 [2024-11-20 08:28:27.235505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.376 [2024-11-20 08:28:27.235521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.235549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.235576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.235594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.235610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.235627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.235643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.235660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.235676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.235693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24200 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.235709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.235726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.235742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.235759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.235775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.235834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.235851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.235874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.235898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.235915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.235930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.235947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.235962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.235979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.235994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.236033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.236065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:46.377 [2024-11-20 08:28:27.236097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.236132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.236164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.236196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.236228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236450] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.377 [2024-11-20 08:28:27.236811] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.236846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.236879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.236911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.236944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.377 [2024-11-20 08:28:27.236961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.377 [2024-11-20 08:28:27.236976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.236992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.237392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.237424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.237456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.237488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:46.378 [2024-11-20 08:28:27.237505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.237520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.237553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.237600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.237633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237864] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.378 [2024-11-20 08:28:27.237913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.237945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.237978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.237994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.238009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.238034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.238051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.238067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.238083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.238100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.238115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.238132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.238147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.378 [2024-11-20 08:28:27.238164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.378 [2024-11-20 08:28:27.238179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238197] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25040 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.238709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.238742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.238774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.238817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.238850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:46.379 [2024-11-20 08:28:27.238890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.238924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.238963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.238980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.238995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.239027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.239059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.239090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.239122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.239154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.239186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.239218] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.379 [2024-11-20 08:28:27.239255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.239287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.239327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.239359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.239391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.239423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.239454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.379 [2024-11-20 08:28:27.239486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.379 [2024-11-20 08:28:27.239502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1540a50 is same with the state(6) to be set 00:15:46.379 [2024-11-20 08:28:27.239528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.379 [2024-11-20 08:28:27.239550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.380 [2024-11-20 08:28:27.239562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24632 len:8 PRP1 0x0 PRP2 0x0 00:15:46.380 [2024-11-20 
08:28:27.239577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.239593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.380 [2024-11-20 08:28:27.239605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.380 [2024-11-20 08:28:27.239616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:8 PRP1 0x0 PRP2 0x0 00:15:46.380 [2024-11-20 08:28:27.239631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.239646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.380 [2024-11-20 08:28:27.239658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.380 [2024-11-20 08:28:27.239669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25160 len:8 PRP1 0x0 PRP2 0x0 00:15:46.380 [2024-11-20 08:28:27.239684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.239706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.380 [2024-11-20 08:28:27.239717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.380 [2024-11-20 08:28:27.239729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25168 len:8 PRP1 0x0 PRP2 0x0 00:15:46.380 [2024-11-20 08:28:27.239751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.239766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.380 [2024-11-20 08:28:27.239777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.380 [2024-11-20 08:28:27.239788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25176 len:8 PRP1 0x0 PRP2 0x0 00:15:46.380 [2024-11-20 08:28:27.239815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.239838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.380 [2024-11-20 08:28:27.239849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.380 [2024-11-20 08:28:27.239861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:8 PRP1 0x0 PRP2 0x0 00:15:46.380 [2024-11-20 08:28:27.239876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.239890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.380 [2024-11-20 08:28:27.239901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.380 [2024-11-20 08:28:27.239912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25192 len:8 PRP1 0x0 PRP2 0x0 00:15:46.380 [2024-11-20 08:28:27.239927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.239941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.380 [2024-11-20 08:28:27.239952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.380 [2024-11-20 08:28:27.239963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25200 len:8 PRP1 0x0 PRP2 0x0 00:15:46.380 [2024-11-20 08:28:27.239978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.239993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.380 [2024-11-20 08:28:27.240004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.380 [2024-11-20 08:28:27.240015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25208 len:8 PRP1 0x0 PRP2 0x0 00:15:46.380 [2024-11-20 08:28:27.240031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.240094] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:15:46.380 [2024-11-20 08:28:27.240155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.380 [2024-11-20 08:28:27.240177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.240194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.380 [2024-11-20 08:28:27.240208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.240223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.380 [2024-11-20 08:28:27.240251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.240268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.380 [2024-11-20 08:28:27.240298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.380 [2024-11-20 08:28:27.240314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:15:46.380 [2024-11-20 08:28:27.240369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9710 (9): Bad file descriptor 00:15:46.380 [2024-11-20 08:28:27.244189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:15:46.380 [2024-11-20 08:28:27.266490] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
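The burst of ABORTED - SQ DELETION notices above is bdev_nvme draining the I/O that was still queued on the dropped TCP connection: each queued READ/WRITE is completed manually with an abort status, the path is failed over (here from 10.0.0.3:4422 back to 10.0.0.3:4420), and the controller is reset on the new path. One way to condense a capture like this is to count those markers; a minimal sketch, assuming the output has been saved to the test's try.txt file:

  grep -c 'ABORTED - SQ DELETION' try.txt     # queued commands aborted on the old path
  grep -c 'Start failover from' try.txt       # path switches attempted
  grep -c 'in failed state.' try.txt          # controllers that entered the failed state before reset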
00:15:46.380 8630.20 IOPS, 33.71 MiB/s [2024-11-20T08:28:33.941Z] 8585.55 IOPS, 33.54 MiB/s [2024-11-20T08:28:33.941Z] 8552.75 IOPS, 33.41 MiB/s [2024-11-20T08:28:33.941Z] 8525.00 IOPS, 33.30 MiB/s [2024-11-20T08:28:33.941Z] 8513.00 IOPS, 33.25 MiB/s [2024-11-20T08:28:33.941Z] 8543.53 IOPS, 33.37 MiB/s 00:15:46.380 Latency(us) 00:15:46.380 [2024-11-20T08:28:33.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.380 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.380 Verification LBA range: start 0x0 length 0x4000 00:15:46.380 NVMe0n1 : 15.01 8544.90 33.38 236.99 0.00 14543.42 577.16 16324.42 00:15:46.380 [2024-11-20T08:28:33.941Z] =================================================================================================================== 00:15:46.380 [2024-11-20T08:28:33.941Z] Total : 8544.90 33.38 236.99 0.00 14543.42 577.16 16324.42 00:15:46.380 Received shutdown signal, test time was about 15.000000 seconds 00:15:46.380 00:15:46.380 Latency(us) 00:15:46.380 [2024-11-20T08:28:33.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.380 [2024-11-20T08:28:33.941Z] =================================================================================================================== 00:15:46.380 [2024-11-20T08:28:33.941Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:46.380 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:46.380 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:46.380 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:46.380 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75575 00:15:46.380 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:46.380 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75575 /var/tmp/bdevperf.sock 00:15:46.380 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # '[' -z 75575 ']' 00:15:46.380 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:46.380 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@843 -- # local max_retries=100 00:15:46.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:46.380 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
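After the 15-second verify run summarized above, failover.sh asserts that the capture contains exactly three 'Resetting controller successful' messages before moving on, then relaunches bdevperf in RPC-server mode (-z -r /var/tmp/bdevperf.sock) for the next phase of the test. The assertion, reduced to a sketch of what the trace shows:

  count=$(grep -c 'Resetting controller successful' "$log")   # $log stands for the captured output, try.txt in this run
  (( count != 3 )) && exit 1                                   # sketch; the script's actual failure handling is not shown in the trace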
00:15:46.380 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@847 -- # xtrace_disable 00:15:46.380 08:28:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:46.639 08:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:15:46.639 08:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@871 -- # return 0 00:15:46.639 08:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:47.207 [2024-11-20 08:28:34.505715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:47.207 08:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:47.466 [2024-11-20 08:28:34.790091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:47.466 08:28:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:47.725 NVMe0n1 00:15:47.725 08:28:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:47.983 00:15:47.983 08:28:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:48.550 00:15:48.550 08:28:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:48.550 08:28:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:48.810 08:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:49.069 08:28:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:52.354 08:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:52.354 08:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:52.354 08:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75658 00:15:52.354 08:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:52.354 08:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75658 00:15:53.731 { 00:15:53.731 "results": [ 00:15:53.731 { 00:15:53.731 "job": "NVMe0n1", 00:15:53.731 "core_mask": "0x1", 00:15:53.731 "workload": "verify", 00:15:53.731 "status": "finished", 00:15:53.731 "verify_range": { 00:15:53.731 "start": 0, 00:15:53.731 "length": 16384 00:15:53.731 }, 00:15:53.731 "queue_depth": 128, 
00:15:53.731 "io_size": 4096, 00:15:53.731 "runtime": 1.00969, 00:15:53.731 "iops": 7328.9821628420605, 00:15:53.731 "mibps": 28.6288365736018, 00:15:53.731 "io_failed": 0, 00:15:53.731 "io_timeout": 0, 00:15:53.731 "avg_latency_us": 17355.01064176904, 00:15:53.731 "min_latency_us": 1072.4072727272728, 00:15:53.731 "max_latency_us": 16205.265454545455 00:15:53.731 } 00:15:53.731 ], 00:15:53.731 "core_count": 1 00:15:53.731 } 00:15:53.731 08:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:53.731 [2024-11-20 08:28:33.169069] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:15:53.731 [2024-11-20 08:28:33.169240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75575 ] 00:15:53.731 [2024-11-20 08:28:33.319720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.731 [2024-11-20 08:28:33.387420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.731 [2024-11-20 08:28:33.459097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:53.731 [2024-11-20 08:28:36.394372] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:53.732 [2024-11-20 08:28:36.394527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.732 [2024-11-20 08:28:36.394554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.732 [2024-11-20 08:28:36.394573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.732 [2024-11-20 08:28:36.394588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.732 [2024-11-20 08:28:36.394602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.732 [2024-11-20 08:28:36.394616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.732 [2024-11-20 08:28:36.394632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.732 [2024-11-20 08:28:36.394646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.732 [2024-11-20 08:28:36.394660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:15:53.732 [2024-11-20 08:28:36.394719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:15:53.732 [2024-11-20 08:28:36.394755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb22710 (9): Bad file descriptor 00:15:53.732 [2024-11-20 08:28:36.401790] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:15:53.732 Running I/O for 1 seconds... 
00:15:53.732 7264.00 IOPS, 28.38 MiB/s 00:15:53.732 Latency(us) 00:15:53.732 [2024-11-20T08:28:41.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.732 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.732 Verification LBA range: start 0x0 length 0x4000 00:15:53.732 NVMe0n1 : 1.01 7328.98 28.63 0.00 0.00 17355.01 1072.41 16205.27 00:15:53.732 [2024-11-20T08:28:41.293Z] =================================================================================================================== 00:15:53.732 [2024-11-20T08:28:41.293Z] Total : 7328.98 28.63 0.00 0.00 17355.01 1072.41 16205.27 00:15:53.732 08:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:53.732 08:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:53.732 08:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.990 08:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:53.990 08:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:54.557 08:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:54.816 08:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75575 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' -z 75575 ']' 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@961 -- # kill -0 75575 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # uname 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 75575 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:15:58.103 killing process with pid 75575 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@975 -- # echo 'killing process with pid 75575' 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # kill 75575 00:15:58.103 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@981 -- # wait 75575 00:15:58.362 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:58.362 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:58.620 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:58.620 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:58.620 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:58.621 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:58.621 08:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:58.621 rmmod nvme_tcp 00:15:58.621 rmmod nvme_fabrics 00:15:58.621 rmmod nvme_keyring 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75309 ']' 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75309 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' -z 75309 ']' 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@961 -- # kill -0 75309 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # uname 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 75309 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:15:58.621 killing process with pid 75309 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@975 -- # echo 'killing process with pid 75309' 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # kill 75309 00:15:58.621 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@981 -- # wait 75309 00:15:58.879 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:58.879 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:58.879 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:58.879 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:58.879 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:15:58.879 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:58.879 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:15:58.879 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:58.879 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:58.879 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:58.879 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:58.879 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:59.139 00:15:59.139 real 0m34.524s 00:15:59.139 user 2m13.436s 00:15:59.139 sys 0m5.777s 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1133 -- # xtrace_disable 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:59.139 ************************************ 00:15:59.139 END TEST nvmf_failover 00:15:59.139 ************************************ 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1114 -- # xtrace_disable 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.139 ************************************ 00:15:59.139 START TEST nvmf_host_discovery 00:15:59.139 ************************************ 00:15:59.139 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:59.399 * Looking for test storage... 
00:15:59.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1638 -- # lcov --version 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:15:59.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.399 --rc genhtml_branch_coverage=1 00:15:59.399 --rc genhtml_function_coverage=1 00:15:59.399 --rc genhtml_legend=1 00:15:59.399 --rc geninfo_all_blocks=1 00:15:59.399 --rc geninfo_unexecuted_blocks=1 00:15:59.399 00:15:59.399 ' 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:15:59.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.399 --rc genhtml_branch_coverage=1 00:15:59.399 --rc genhtml_function_coverage=1 00:15:59.399 --rc genhtml_legend=1 00:15:59.399 --rc geninfo_all_blocks=1 00:15:59.399 --rc geninfo_unexecuted_blocks=1 00:15:59.399 00:15:59.399 ' 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:15:59.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.399 --rc genhtml_branch_coverage=1 00:15:59.399 --rc genhtml_function_coverage=1 00:15:59.399 --rc genhtml_legend=1 00:15:59.399 --rc geninfo_all_blocks=1 00:15:59.399 --rc geninfo_unexecuted_blocks=1 00:15:59.399 00:15:59.399 ' 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:15:59.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.399 --rc genhtml_branch_coverage=1 00:15:59.399 --rc genhtml_function_coverage=1 00:15:59.399 --rc genhtml_legend=1 00:15:59.399 --rc geninfo_all_blocks=1 00:15:59.399 --rc geninfo_unexecuted_blocks=1 00:15:59.399 00:15:59.399 ' 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.399 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.400 08:28:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:59.400 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 
-- # NVMF_BRIDGE=nvmf_br 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:59.400 Cannot find device "nvmf_init_br" 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:59.400 Cannot find device "nvmf_init_br2" 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:59.400 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:59.660 Cannot find device "nvmf_tgt_br" 00:15:59.660 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:59.660 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:59.660 Cannot find device "nvmf_tgt_br2" 00:15:59.660 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:59.660 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:59.660 Cannot find device "nvmf_init_br" 00:15:59.660 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:59.660 08:28:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:59.660 Cannot find device "nvmf_init_br2" 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:59.660 Cannot find device "nvmf_tgt_br" 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:59.660 Cannot find device "nvmf_tgt_br2" 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:59.660 Cannot find device "nvmf_br" 00:15:59.660 08:28:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:59.660 Cannot find device "nvmf_init_if" 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:59.660 Cannot find device "nvmf_init_if2" 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:59.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:59.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:59.660 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:59.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:59.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:59.919 00:15:59.919 --- 10.0.0.3 ping statistics --- 00:15:59.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.919 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:59.919 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:59.919 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:15:59.919 00:15:59.919 --- 10.0.0.4 ping statistics --- 00:15:59.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.919 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:59.919 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:59.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:59.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:59.919 00:15:59.919 --- 10.0.0.1 ping statistics --- 00:15:59.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.920 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:59.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:59.920 00:15:59.920 --- 10.0.0.2 ping statistics --- 00:15:59.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.920 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75985 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75985 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # '[' -z 75985 ']' 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@843 -- # local max_retries=100 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
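The discovery test rebuilds the same virtual topology the failover test tore down a moment earlier: one veth pair per initiator interface (10.0.0.1 and 10.0.0.2 stay on the host side), one per target interface (10.0.0.3 and 10.0.0.4 move into the nvmf_tgt_ns_spdk namespace), everything bridged over nvmf_br with iptables rules accepting TCP port 4420, and the four pings confirm reachability in both directions before the target application starts inside the namespace. Condensed from the nvmf_veth_init trace above, showing one interface of each kind (names and addresses as in this run; the ip link set ... up steps are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                   # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host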
00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@847 -- # xtrace_disable 00:15:59.920 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.920 [2024-11-20 08:28:47.419728] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:15:59.920 [2024-11-20 08:28:47.419835] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.178 [2024-11-20 08:28:47.568472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.178 [2024-11-20 08:28:47.643001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.178 [2024-11-20 08:28:47.643070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.178 [2024-11-20 08:28:47.643081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.178 [2024-11-20 08:28:47.643091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.178 [2024-11-20 08:28:47.643098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.178 [2024-11-20 08:28:47.643603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.178 [2024-11-20 08:28:47.718380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@871 -- # return 0 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@735 -- # xtrace_disable 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.437 [2024-11-20 08:28:47.848270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.437 [2024-11-20 08:28:47.856459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.437 null0 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.437 null1 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76010 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76010 /tmp/host.sock 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # '[' -z 76010 ']' 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # local rpc_addr=/tmp/host.sock 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@843 -- # local max_retries=100 00:16:00.437 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@847 -- # xtrace_disable 00:16:00.437 08:28:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.437 [2024-11-20 08:28:47.948618] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
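This block configures the target and starts the host-side application: nvmf_create_transport sets up TCP with an 8192-byte in-capsule data size (-u 8192), the discovery subsystem is exposed on 10.0.0.3:8009, two null bdevs back the namespaces attached later in the test, and a second nvmf_tgt is launched on core 0 with its own RPC socket (/tmp/host.sock) to play the NVMe-oF host. The same sequence written out against rpc.py, assuming rpc_cmd in the trace is the usual autotest wrapper around SPDK's scripts/rpc.py:

  # Target-side RPCs (default socket /var/tmp/spdk.sock in the target namespace).
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.3 -s 8009
  rpc.py bdev_null_create null0 1000 512   # 1000 MB null bdev, 512-byte blocks
  rpc.py bdev_null_create null1 1000 512
  rpc.py bdev_wait_for_examine
  # Host-side application: one core, separate RPC socket.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!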
00:16:00.437 [2024-11-20 08:28:47.948734] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76010 ] 00:16:00.695 [2024-11-20 08:28:48.103921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.695 [2024-11-20 08:28:48.169537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.695 [2024-11-20 08:28:48.223671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@871 -- # return 0 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:00.954 08:28:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:00.954 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.955 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.955 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:00.955 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:00.955 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:00.955 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.955 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:00.955 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.955 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.955 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.955 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:01.214 08:28:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.214 [2024-11-20 08:28:48.680748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.214 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_notification_count 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # (( notification_count == expected_count )) 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_subsystem_names 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ '' == \n\v\m\e\0 ]] 00:16:01.473 08:28:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@927 -- # sleep 1 00:16:02.042 [2024-11-20 08:28:49.311574] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:02.042 [2024-11-20 08:28:49.311606] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:02.042 [2024-11-20 08:28:49.311630] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:02.042 [2024-11-20 08:28:49.317606] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:02.042 [2024-11-20 08:28:49.372007] 
bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:02.042 [2024-11-20 08:28:49.373023] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xff5e60:1 started. 00:16:02.042 [2024-11-20 08:28:49.374880] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:02.042 [2024-11-20 08:28:49.374903] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:02.042 [2024-11-20 08:28:49.379921] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xff5e60 was disconnected and freed. delete nvme_qpair. 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_subsystem_names 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_bdev_list 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:02.610 08:28:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.610 08:28:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_subsystem_paths nvme0 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ 4420 == \4\4\2\0 ]] 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_notification_count 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # (( notification_count == expected_count )) 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:02.610 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.611 [2024-11-20 08:28:50.134054] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1004000:1 started. 00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_bdev_list 00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.611 [2024-11-20 08:28:50.141032] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1004000 was disconnected and freed. delete nvme_qpair. 
00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.611 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_notification_count 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # (( notification_count == expected_count )) 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.870 [2024-11-20 08:28:50.270673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:02.870 [2024-11-20 08:28:50.271419] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:02.870 [2024-11-20 08:28:50.271461] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:02.870 [2024-11-20 08:28:50.277404] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_subsystem_names 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@925 -- # return 0 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_bdev_list 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.870 [2024-11-20 08:28:50.335856] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:16:02.870 [2024-11-20 08:28:50.335914] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:02.870 [2024-11-20 08:28:50.335926] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:02.870 [2024-11-20 08:28:50.335933] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_subsystem_paths nvme0 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:02.870 
08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.870 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_notification_count 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # (( notification_count == expected_count )) 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.130 [2024-11-20 08:28:50.487685] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:03.130 [2024-11-20 08:28:50.487720] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:03.130 [2024-11-20 08:28:50.492169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.130 [2024-11-20 08:28:50.492203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.130 [2024-11-20 08:28:50.492216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.130 [2024-11-20 08:28:50.492226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:03.130 [2024-11-20 08:28:50.492236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.130 [2024-11-20 08:28:50.492246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.130 [2024-11-20 08:28:50.492256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.130 [2024-11-20 08:28:50.492265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.130 
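Removing the 4420 listener triggers an AER on the discovery controller; the discovery poller re-reads the log page, drops the 10.0.0.3:4420 path and keeps 4421, and the ABORTED / SQ DELETION completions above are the in-flight async event requests being cancelled as that queue pair is torn down. The checks that follow use the trace's get_subsystem_paths helper to confirm that only port 4421 remains; reconstructed from the xtrace it amounts to:

  get_subsystem_paths() {
      # List the listener ports (trsvcid) of every path attached to the named controller.
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  # Prints "4420 4421" before the listener is removed and "4421" afterwards.
  get_subsystem_paths nvme0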
[2024-11-20 08:28:50.492274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd2230 is same with the state(6) to be set 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:03.130 [2024-11-20 08:28:50.493679] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:16:03.130 [2024-11-20 08:28:50.493707] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:03.130 [2024-11-20 08:28:50.493767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd2230 (9): Bad file descriptor 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_subsystem_names 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.130 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_bdev_list 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.131 08:28:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_subsystem_paths nvme0 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ 4421 == \4\4\2\1 ]] 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_notification_count 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:03.131 08:28:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:03.131 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.392 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:03.392 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:03.392 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:03.392 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # (( notification_count == expected_count )) 00:16:03.392 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:03.392 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:03.392 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:03.392 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.392 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:03.392 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:03.392 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_subsystem_names 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ '' == '' ]] 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:03.393 
08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_bdev_list 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # [[ '' == '' ]] 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # local max=10 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@923 -- # (( max-- )) 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # get_notification_count 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # (( notification_count == expected_count )) 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@925 -- # return 0 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:03.393 08:28:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.774 [2024-11-20 08:28:51.943323] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:04.774 [2024-11-20 08:28:51.943523] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:04.774 [2024-11-20 08:28:51.943631] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:04.774 [2024-11-20 08:28:51.949358] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:16:04.774 [2024-11-20 08:28:52.007856] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:16:04.774 [2024-11-20 08:28:52.008932] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xfcac40:1 started. 00:16:04.774 [2024-11-20 08:28:52.011175] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:04.774 [2024-11-20 08:28:52.011377] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.774 [2024-11-20 08:28:52.013020] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xfcac40 was disconnected and freed. delete nvme_qpair. 
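The trace above reaches host/discovery.sh line 141: discovery is restarted against 10.0.0.3:8009, the discovery controller attaches, and the discovered NVM subsystem nqn.2016-06.io.spdk:cnode0 is re-attached as controller nvme0. The block that follows exercises the duplicate-start path: issuing bdev_nvme_start_discovery a second time with the same controller name is expected to be rejected with JSON-RPC error -17 ("File exists"). A minimal sketch of that same two-call sequence, issued by hand against the host application's RPC socket — the socket path, transport address, port, hostnqn and flags are copied from the trace; that the behavior is identical when run outside the autotest harness is an assumption:

  # start the discovery service on the host; -w waits until the discovery ctrlr is attached
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # a second start with the same name should fail with -17 "File exists", as the test asserts
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
      || echo "duplicate discovery start rejected"
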
00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # local es=0 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@658 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.774 request: 00:16:04.774 { 00:16:04.774 "name": "nvme", 00:16:04.774 "trtype": "tcp", 00:16:04.774 "traddr": "10.0.0.3", 00:16:04.774 "adrfam": "ipv4", 00:16:04.774 "trsvcid": "8009", 00:16:04.774 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:04.774 "wait_for_attach": true, 00:16:04.774 "method": "bdev_nvme_start_discovery", 00:16:04.774 "req_id": 1 00:16:04.774 } 00:16:04.774 Got JSON-RPC error response 00:16:04.774 response: 00:16:04.774 { 00:16:04.774 "code": -17, 00:16:04.774 "message": "File exists" 00:16:04.774 } 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@658 -- # es=1 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:04.774 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # local es=0 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@658 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.775 request: 00:16:04.775 { 00:16:04.775 "name": "nvme_second", 00:16:04.775 "trtype": "tcp", 00:16:04.775 "traddr": "10.0.0.3", 00:16:04.775 "adrfam": "ipv4", 00:16:04.775 "trsvcid": "8009", 00:16:04.775 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:04.775 "wait_for_attach": true, 00:16:04.775 "method": "bdev_nvme_start_discovery", 00:16:04.775 "req_id": 1 00:16:04.775 } 00:16:04.775 Got JSON-RPC error response 00:16:04.775 response: 00:16:04.775 { 00:16:04.775 "code": -17, 00:16:04.775 "message": "File exists" 00:16:04.775 } 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@658 -- # es=1 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # local es=0 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@658 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:04.775 08:28:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.712 [2024-11-20 08:28:53.259753] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 00:16:05.712 [2024-11-20 08:28:53.259853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfcc900 with addr=10.0.0.3, port=8010 00:16:05.712 [2024-11-20 08:28:53.259889] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:05.712 [2024-11-20 08:28:53.259900] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:05.712 [2024-11-20 08:28:53.259910] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:07.089 [2024-11-20 08:28:54.259731] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:07.089 [2024-11-20 08:28:54.259800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfcc900 with addr=10.0.0.3, port=8010 00:16:07.090 [2024-11-20 08:28:54.259843] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:07.090 [2024-11-20 08:28:54.259854] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:07.090 [2024-11-20 08:28:54.259864] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:08.026 [2024-11-20 08:28:55.259591] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:08.026 request: 00:16:08.026 { 00:16:08.026 "name": "nvme_second", 00:16:08.026 "trtype": "tcp", 00:16:08.026 "traddr": "10.0.0.3", 00:16:08.026 "adrfam": "ipv4", 00:16:08.026 "trsvcid": "8010", 00:16:08.026 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:08.026 "wait_for_attach": false, 00:16:08.026 "attach_timeout_ms": 3000, 00:16:08.026 "method": "bdev_nvme_start_discovery", 00:16:08.026 "req_id": 1 00:16:08.026 } 00:16:08.026 Got JSON-RPC error response 00:16:08.026 response: 00:16:08.026 { 00:16:08.026 "code": -110, 00:16:08.026 "message": "Connection timed out" 00:16:08.026 } 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@658 -- # es=1 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:08.026 08:28:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76010 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:08.026 rmmod nvme_tcp 00:16:08.026 rmmod nvme_fabrics 00:16:08.026 rmmod nvme_keyring 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75985 ']' 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75985 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' -z 75985 ']' 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@961 -- # kill -0 75985 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # uname 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 75985 00:16:08.026 killing process with pid 75985 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@975 -- # echo 'killing process with pid 75985' 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # kill 75985 00:16:08.026 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@981 -- # wait 75985 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.285 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.544 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:08.544 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.544 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.544 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.545 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:08.545 00:16:08.545 real 0m9.226s 00:16:08.545 user 0m17.271s 00:16:08.545 sys 0m2.054s 00:16:08.545 ************************************ 00:16:08.545 END TEST nvmf_host_discovery 00:16:08.545 ************************************ 00:16:08.545 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1133 -- # xtrace_disable 00:16:08.545 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.545 08:28:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:08.545 08:28:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:16:08.545 08:28:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1114 -- # xtrace_disable 00:16:08.545 08:28:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.545 ************************************ 00:16:08.545 START TEST nvmf_host_multipath_status 00:16:08.545 ************************************ 00:16:08.545 08:28:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1132 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:08.545 * Looking for test storage... 00:16:08.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:08.545 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:16:08.545 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1638 -- # lcov --version 00:16:08.545 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:16:08.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.805 --rc genhtml_branch_coverage=1 00:16:08.805 --rc genhtml_function_coverage=1 00:16:08.805 --rc genhtml_legend=1 00:16:08.805 --rc geninfo_all_blocks=1 00:16:08.805 --rc geninfo_unexecuted_blocks=1 00:16:08.805 00:16:08.805 ' 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:16:08.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.805 --rc genhtml_branch_coverage=1 00:16:08.805 --rc genhtml_function_coverage=1 00:16:08.805 --rc genhtml_legend=1 00:16:08.805 --rc geninfo_all_blocks=1 00:16:08.805 --rc geninfo_unexecuted_blocks=1 00:16:08.805 00:16:08.805 ' 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:16:08.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.805 --rc genhtml_branch_coverage=1 00:16:08.805 --rc genhtml_function_coverage=1 00:16:08.805 --rc genhtml_legend=1 00:16:08.805 --rc geninfo_all_blocks=1 00:16:08.805 --rc geninfo_unexecuted_blocks=1 00:16:08.805 00:16:08.805 ' 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:16:08.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.805 --rc genhtml_branch_coverage=1 00:16:08.805 --rc genhtml_function_coverage=1 00:16:08.805 --rc genhtml_legend=1 00:16:08.805 --rc geninfo_all_blocks=1 00:16:08.805 --rc geninfo_unexecuted_blocks=1 00:16:08.805 00:16:08.805 ' 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:08.805 08:28:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.805 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:08.806 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:08.806 
08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:08.806 Cannot find device "nvmf_init_br" 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:08.806 08:28:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:08.806 Cannot find device "nvmf_init_br2" 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:08.806 Cannot find device "nvmf_tgt_br" 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.806 Cannot find device "nvmf_tgt_br2" 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:08.806 Cannot find device "nvmf_init_br" 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:08.806 Cannot find device "nvmf_init_br2" 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:08.806 Cannot find device "nvmf_tgt_br" 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:08.806 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:08.807 Cannot find device "nvmf_tgt_br2" 00:16:08.807 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:08.807 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:08.807 Cannot find device "nvmf_br" 00:16:08.807 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:08.807 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:08.807 Cannot find device "nvmf_init_if" 00:16:08.807 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:08.807 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:08.807 Cannot find device "nvmf_init_if2" 00:16:08.807 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:08.807 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.807 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:08.807 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.807 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:08.807 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:09.066 08:28:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # 
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:09.066 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:09.066 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:16:09.066 00:16:09.066 --- 10.0.0.3 ping statistics --- 00:16:09.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.066 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:09.066 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:09.066 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:16:09.066 00:16:09.066 --- 10.0.0.4 ping statistics --- 00:16:09.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.066 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:09.066 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:09.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:09.066 00:16:09.066 --- 10.0.0.1 ping statistics --- 00:16:09.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.066 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:09.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:09.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:16:09.067 00:16:09.067 --- 10.0.0.2 ping statistics --- 00:16:09.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.067 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76517 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76517 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # '[' -z 76517 ']' 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@843 -- # local max_retries=100 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@847 -- # xtrace_disable 00:16:09.067 08:28:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:09.326 [2024-11-20 08:28:56.677545] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:16:09.326 [2024-11-20 08:28:56.677741] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.326 [2024-11-20 08:28:56.845988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:09.587 [2024-11-20 08:28:56.949218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.587 [2024-11-20 08:28:56.949696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.587 [2024-11-20 08:28:56.949929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.587 [2024-11-20 08:28:56.950104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.587 [2024-11-20 08:28:56.950265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.587 [2024-11-20 08:28:56.951969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.587 [2024-11-20 08:28:56.951996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.587 [2024-11-20 08:28:57.024027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:09.587 08:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:16:09.587 08:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@871 -- # return 0 00:16:09.587 08:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:09.587 08:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@735 -- # xtrace_disable 00:16:09.587 08:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:09.587 08:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.588 08:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76517 00:16:09.588 08:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:09.911 [2024-11-20 08:28:57.446686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.169 08:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:10.428 Malloc0 00:16:10.428 08:28:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:10.687 08:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:11.256 08:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:11.256 [2024-11-20 08:28:58.800487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:11.515 08:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:11.515 [2024-11-20 08:28:59.064536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:11.773 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76571 00:16:11.773 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:11.773 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:11.773 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76571 /var/tmp/bdevperf.sock 00:16:11.773 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # '[' -z 76571 ']' 00:16:11.773 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:11.773 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@843 -- # local max_retries=100 00:16:11.773 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:11.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:11.774 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@847 -- # xtrace_disable 00:16:11.774 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:12.032 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:16:12.032 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@871 -- # return 0 00:16:12.032 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:12.290 08:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:12.858 Nvme0n1 00:16:12.858 08:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:13.118 Nvme0n1 00:16:13.118 08:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:13.118 08:29:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:15.651 08:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:15.651 
08:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:15.651 08:29:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:15.909 08:29:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:16.846 08:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:16.846 08:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:16.846 08:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.846 08:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:17.119 08:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.119 08:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:17.119 08:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:17.119 08:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.686 08:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:17.686 08:29:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:17.686 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:17.686 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.945 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.946 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:17.946 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.946 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:18.205 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.205 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:18.205 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:18.205 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.465 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.465 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:18.465 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.465 08:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:18.724 08:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.724 08:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:18.724 08:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:18.985 08:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:19.244 08:29:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:20.623 08:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:20.623 08:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:20.623 08:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.623 08:29:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:20.623 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:20.623 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:20.624 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.624 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:20.883 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.883 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:20.883 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.883 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:21.142 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.142 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:21.142 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.142 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:21.400 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.400 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:21.400 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.401 08:29:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:21.659 08:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.659 08:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:21.659 08:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.659 08:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:21.918 08:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.919 08:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:21.919 08:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:22.178 08:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:22.436 08:29:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:23.814 08:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:23.814 08:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:23.814 08:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.814 08:29:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:23.814 08:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.814 08:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:23.814 08:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:23.814 08:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.072 08:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:24.073 08:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:24.073 08:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.073 08:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:24.331 08:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.331 08:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:24.331 08:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:24.331 08:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.591 08:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.591 08:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:24.591 08:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.591 08:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:25.158 08:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.158 08:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:25.158 08:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.158 08:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:25.158 08:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.158 08:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:25.158 08:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:25.727 08:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:25.727 08:29:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:26.756 08:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:26.756 08:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:26.756 08:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.756 08:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:27.323 08:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.323 08:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:27.323 08:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.323 08:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:27.323 08:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:27.323 08:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:27.323 08:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:27.323 08:29:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.582 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.582 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:27.582 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:27.582 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.841 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:16:27.841 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:27.841 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:27.841 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.409 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.409 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:28.409 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:28.409 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.409 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:28.409 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:28.409 08:29:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:28.666 08:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:28.925 08:29:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:30.302 08:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:30.302 08:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:30.302 08:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.302 08:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:30.302 08:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:30.302 08:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:30.302 08:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:30.302 08:29:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.871 08:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:30.871 08:29:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:30.871 08:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:30.871 08:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.130 08:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.130 08:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:31.130 08:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.130 08:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:31.389 08:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.389 08:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:31.389 08:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.389 08:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:31.648 08:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:31.648 08:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:31.648 08:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.648 08:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:31.905 08:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:31.905 08:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:31.905 08:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:32.163 08:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:32.422 08:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:33.360 08:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:33.360 08:29:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:33.360 08:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.360 08:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:33.619 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:33.619 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:33.619 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:33.619 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.878 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.878 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:33.878 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.878 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:34.137 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.137 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:34.137 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.137 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:34.396 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.396 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:34.396 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.396 08:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:34.655 08:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:34.655 08:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:34.655 08:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
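[Annotation, not part of the captured trace] Every check_status block in this trace reduces to the same query: dump the host-side I/O paths from bdevperf's RPC socket and select one field (current, connected, or accessible) for the path whose listener port matches. A one-line sketch of that pattern, using the same RPC call and jq filter that appear in the trace; the port (4421) and field (accessible) are just one example combination:

    # Report whether the path through listener port 4421 is currently accessible.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").accessible'

[End annotation]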
00:16:34.655 08:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:34.915 08:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.915 08:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:35.175 08:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:35.175 08:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:35.435 08:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:35.695 08:29:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:37.073 08:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:37.073 08:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:37.073 08:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:37.073 08:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.073 08:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.073 08:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:37.073 08:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:37.073 08:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.642 08:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.642 08:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:37.642 08:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.642 08:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:37.642 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.642 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:37.642 08:29:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.642 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:37.901 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.901 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:37.901 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.901 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:38.160 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.160 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:38.160 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.160 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:38.419 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.419 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:38.419 08:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:38.678 08:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:39.248 08:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:40.200 08:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:40.200 08:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:40.201 08:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.201 08:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:40.460 08:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:40.460 08:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:40.460 08:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.460 08:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:40.718 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.718 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:40.718 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.718 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:40.977 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.977 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:40.977 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.977 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:41.236 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.236 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:41.236 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.236 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:41.494 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.495 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:41.495 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.495 08:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:41.753 08:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.753 08:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:41.753 08:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:42.012 08:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:42.271 08:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:43.206 08:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:43.206 08:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:43.206 08:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.206 08:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:43.464 08:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.464 08:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:43.464 08:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.464 08:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:43.724 08:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.724 08:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:43.724 08:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.724 08:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:43.984 08:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.984 08:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:43.984 08:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.984 08:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:44.243 08:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.243 08:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:44.243 08:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.243 08:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:44.502 08:29:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.502 08:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:44.502 08:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.502 08:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:45.070 08:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.071 08:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:45.071 08:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:45.330 08:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:45.619 08:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:46.575 08:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:46.575 08:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:46.575 08:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.575 08:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:46.834 08:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.834 08:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:46.834 08:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.834 08:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:47.093 08:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:47.093 08:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:47.093 08:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.093 08:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:47.352 08:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.352 08:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:47.352 08:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:47.352 08:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.611 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.611 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:47.611 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.611 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:47.869 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.869 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:47.869 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.869 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:48.129 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:48.129 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76571 00:16:48.129 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' -z 76571 ']' 00:16:48.129 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # kill -0 76571 00:16:48.129 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # uname 00:16:48.129 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:16:48.129 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 76571 00:16:48.129 killing process with pid 76571 00:16:48.129 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:16:48.129 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:16:48.129 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@975 -- # echo 'killing process with pid 76571' 00:16:48.129 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # kill 76571 00:16:48.129 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@981 -- # wait 76571 00:16:48.129 { 00:16:48.129 "results": [ 00:16:48.129 { 00:16:48.129 "job": "Nvme0n1", 00:16:48.129 "core_mask": "0x4", 
00:16:48.129 "workload": "verify", 00:16:48.129 "status": "terminated", 00:16:48.129 "verify_range": { 00:16:48.129 "start": 0, 00:16:48.129 "length": 16384 00:16:48.129 }, 00:16:48.129 "queue_depth": 128, 00:16:48.129 "io_size": 4096, 00:16:48.129 "runtime": 34.907006, 00:16:48.129 "iops": 8595.122709750587, 00:16:48.129 "mibps": 33.57469808496323, 00:16:48.129 "io_failed": 0, 00:16:48.129 "io_timeout": 0, 00:16:48.129 "avg_latency_us": 14860.688831092648, 00:16:48.129 "min_latency_us": 404.01454545454544, 00:16:48.129 "max_latency_us": 4026531.84 00:16:48.129 } 00:16:48.129 ], 00:16:48.129 "core_count": 1 00:16:48.129 } 00:16:48.392 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76571 00:16:48.392 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:48.392 [2024-11-20 08:28:59.148557] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:16:48.392 [2024-11-20 08:28:59.148698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76571 ] 00:16:48.392 [2024-11-20 08:28:59.300886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.392 [2024-11-20 08:28:59.366815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.392 [2024-11-20 08:28:59.421599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:48.392 Running I/O for 90 seconds... 00:16:48.392 6871.00 IOPS, 26.84 MiB/s [2024-11-20T08:29:35.953Z] 7627.50 IOPS, 29.79 MiB/s [2024-11-20T08:29:35.953Z] 7907.67 IOPS, 30.89 MiB/s [2024-11-20T08:29:35.953Z] 8049.75 IOPS, 31.44 MiB/s [2024-11-20T08:29:35.953Z] 8156.60 IOPS, 31.86 MiB/s [2024-11-20T08:29:35.953Z] 8251.33 IOPS, 32.23 MiB/s [2024-11-20T08:29:35.953Z] 8305.57 IOPS, 32.44 MiB/s [2024-11-20T08:29:35.953Z] 8343.38 IOPS, 32.59 MiB/s [2024-11-20T08:29:35.953Z] 8361.22 IOPS, 32.66 MiB/s [2024-11-20T08:29:35.953Z] 8394.60 IOPS, 32.79 MiB/s [2024-11-20T08:29:35.953Z] 8407.82 IOPS, 32.84 MiB/s [2024-11-20T08:29:35.953Z] 8424.83 IOPS, 32.91 MiB/s [2024-11-20T08:29:35.953Z] 8440.77 IOPS, 32.97 MiB/s [2024-11-20T08:29:35.953Z] 8460.71 IOPS, 33.05 MiB/s [2024-11-20T08:29:35.953Z] 8526.00 IOPS, 33.30 MiB/s [2024-11-20T08:29:35.953Z] [2024-11-20 08:29:16.199464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.392 [2024-11-20 08:29:16.199531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:48.392 [2024-11-20 08:29:16.199622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.392 [2024-11-20 08:29:16.199645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.199669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.199687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.199709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.199726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.199748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.199764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.199787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.199819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.199845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.199861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.199888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.199906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.199933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.199950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.200020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.200059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.200100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.393 [2024-11-20 08:29:16.200139] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.393 [2024-11-20 08:29:16.200178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.393 [2024-11-20 08:29:16.200216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.393 [2024-11-20 08:29:16.200254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.393 [2024-11-20 08:29:16.200292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.393 [2024-11-20 08:29:16.200330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.393 [2024-11-20 08:29:16.200367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.393 [2024-11-20 08:29:16.200405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.200444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.200512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 
08:29:16.200550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.200588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.200651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.200690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.200728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:48.393 [2024-11-20 08:29:16.200751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.393 [2024-11-20 08:29:16.200767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.200789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.200805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.200858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.200877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.200899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.200916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.200938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.200955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.200977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9872 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:48.394 [2024-11-20 08:29:16.200994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.394 [2024-11-20 08:29:16.201256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.394 [2024-11-20 08:29:16.201293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.394 [2024-11-20 08:29:16.201331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.394 [2024-11-20 08:29:16.201369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.394 [2024-11-20 08:29:16.201406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.394 [2024-11-20 08:29:16.201462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.394 [2024-11-20 08:29:16.201500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.394 [2024-11-20 08:29:16.201548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.394 [2024-11-20 08:29:16.201949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:48.394 [2024-11-20 08:29:16.201971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.201987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.202026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.202065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.202115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.202155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.202194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.202250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:16:48.395 [2024-11-20 08:29:16.202271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.202288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.202325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.202980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.395 [2024-11-20 08:29:16.202996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.203022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.203040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.203063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.203080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.203102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.203127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.203151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.203168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.203191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.395 [2024-11-20 08:29:16.203207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.395 [2024-11-20 08:29:16.203230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.203246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.203285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.203323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.203362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.203401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.203440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:48.396 [2024-11-20 08:29:16.203485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.203525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.203582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.203631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.203683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.203722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.203761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.203811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.203855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.203895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.203934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.203972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.203995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.204011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.204033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.204050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.204072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.204088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.204111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.204127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.204179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.204197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.204219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.204235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.204257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.204274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.204986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.396 [2024-11-20 08:29:16.205016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.205052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.205071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.205101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.205117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.205147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.205163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.205192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.205209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.205239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.205256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.205285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.205301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.205331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.205348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.205392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.205413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:48.396 [2024-11-20 08:29:16.205456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.396 [2024-11-20 08:29:16.205475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:16.205506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:16.205523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
00:16:48.397 [2024-11-20 08:29:16.205552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:16.205568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:16.205597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:16.205614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:16.205643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:16.205660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:16.205689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:16.205705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:16.205734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:16.205750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:16.205779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:16.205796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:16.205841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:16.205858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:48.397 8264.12 IOPS, 32.28 MiB/s [2024-11-20T08:29:35.958Z] 7778.00 IOPS, 30.38 MiB/s [2024-11-20T08:29:35.958Z] 7345.89 IOPS, 28.69 MiB/s [2024-11-20T08:29:35.958Z] 6959.26 IOPS, 27.18 MiB/s [2024-11-20T08:29:35.958Z] 6860.95 IOPS, 26.80 MiB/s [2024-11-20T08:29:35.958Z] 6979.19 IOPS, 27.26 MiB/s [2024-11-20T08:29:35.958Z] 7087.41 IOPS, 27.69 MiB/s [2024-11-20T08:29:35.958Z] 7278.96 IOPS, 28.43 MiB/s [2024-11-20T08:29:35.958Z] 7504.83 IOPS, 29.32 MiB/s [2024-11-20T08:29:35.958Z] 7687.92 IOPS, 30.03 MiB/s [2024-11-20T08:29:35.958Z] 7810.31 IOPS, 30.51 MiB/s [2024-11-20T08:29:35.958Z] 7861.78 IOPS, 30.71 MiB/s [2024-11-20T08:29:35.958Z] 7909.29 IOPS, 30.90 MiB/s [2024-11-20T08:29:35.958Z] 7969.24 IOPS, 31.13 MiB/s [2024-11-20T08:29:35.958Z] 8160.63 IOPS, 31.88 MiB/s [2024-11-20T08:29:35.958Z] 8339.84 IOPS, 32.58 MiB/s [2024-11-20T08:29:35.958Z] 8495.88 IOPS, 33.19 MiB/s [2024-11-20T08:29:35.958Z] [2024-11-20 08:29:32.933620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.933689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.933749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.933797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.933841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.933859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.934998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.935028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.935474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.935513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.935552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.935604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.935643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.935681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:16:48.397 [2024-11-20 08:29:32.935704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.935720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.935758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.397 [2024-11-20 08:29:32.935797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.935965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.397 [2024-11-20 08:29:32.935983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:48.397 [2024-11-20 08:29:32.936006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.936333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.936372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.936411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.936458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.936654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.936822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.936863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:48.398 [2024-11-20 08:29:32.936902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.936952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.936977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.936994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.937017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.937033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.937055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.937071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.937094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.937110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.937133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.937150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.937172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.937188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.937211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.398 [2024-11-20 08:29:32.937227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.937250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.937266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:48.398 [2024-11-20 08:29:32.937289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 
nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.398 [2024-11-20 08:29:32.937305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:48.399 [2024-11-20 08:29:32.937345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.399 [2024-11-20 08:29:32.937366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:48.399 [2024-11-20 08:29:32.937390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:48.399 [2024-11-20 08:29:32.937407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:48.399 8540.06 IOPS, 33.36 MiB/s [2024-11-20T08:29:35.960Z] 8572.18 IOPS, 33.49 MiB/s [2024-11-20T08:29:35.960Z] Received shutdown signal, test time was about 34.907869 seconds 00:16:48.399 00:16:48.399 Latency(us) 00:16:48.399 [2024-11-20T08:29:35.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.399 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:48.399 Verification LBA range: start 0x0 length 0x4000 00:16:48.399 Nvme0n1 : 34.91 8595.12 33.57 0.00 0.00 14860.69 404.01 4026531.84 00:16:48.399 [2024-11-20T08:29:35.960Z] =================================================================================================================== 00:16:48.399 [2024-11-20T08:29:35.960Z] Total : 8595.12 33.57 0.00 0.00 14860.69 404.01 4026531.84 00:16:48.399 08:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:48.658 rmmod nvme_tcp 00:16:48.658 rmmod nvme_fabrics 00:16:48.658 rmmod nvme_keyring 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- 
# '[' -n 76517 ']' 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76517 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' -z 76517 ']' 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # kill -0 76517 00:16:48.658 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # uname 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 76517 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:16:48.917 killing process with pid 76517 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@975 -- # echo 'killing process with pid 76517' 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # kill 76517 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@981 -- # wait 76517 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:48.917 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:49.177 00:16:49.177 real 0m40.768s 00:16:49.177 user 2m12.332s 00:16:49.177 sys 0m11.805s 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1133 -- # xtrace_disable 00:16:49.177 08:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:49.177 ************************************ 00:16:49.177 END TEST nvmf_host_multipath_status 00:16:49.177 ************************************ 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1114 -- # xtrace_disable 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.437 ************************************ 00:16:49.437 START TEST nvmf_discovery_remove_ifc 00:16:49.437 ************************************ 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:49.437 * Looking for test storage... 
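The xtrace just above (killprocess through remove_spdk_ns) is nvmftestfini dismantling the virtual fabric used by nvmf_host_multipath_status. Condensed into plain commands, with the iptables filter, interface and namespace names copied from the trace and the per-interface loop added only for brevity, the teardown amounts to roughly:

  # drop only the SPDK-tagged firewall rules, leave the rest of the ruleset intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # detach the bridge-side veth ends and bring them down
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
  done
  # delete the bridge, the host-side veths, and the veths inside the target namespace
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

The nvmf_discovery_remove_ifc test that starts next rebuilds the same topology from scratch before exercising interface removal.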
00:16:49.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1638 -- # lcov --version 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:16:49.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.437 --rc genhtml_branch_coverage=1 00:16:49.437 --rc genhtml_function_coverage=1 00:16:49.437 --rc genhtml_legend=1 00:16:49.437 --rc geninfo_all_blocks=1 00:16:49.437 --rc geninfo_unexecuted_blocks=1 00:16:49.437 00:16:49.437 ' 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:16:49.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.437 --rc genhtml_branch_coverage=1 00:16:49.437 --rc genhtml_function_coverage=1 00:16:49.437 --rc genhtml_legend=1 00:16:49.437 --rc geninfo_all_blocks=1 00:16:49.437 --rc geninfo_unexecuted_blocks=1 00:16:49.437 00:16:49.437 ' 00:16:49.437 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:16:49.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.437 --rc genhtml_branch_coverage=1 00:16:49.437 --rc genhtml_function_coverage=1 00:16:49.437 --rc genhtml_legend=1 00:16:49.437 --rc geninfo_all_blocks=1 00:16:49.437 --rc geninfo_unexecuted_blocks=1 00:16:49.437 00:16:49.437 ' 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:16:49.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.438 --rc genhtml_branch_coverage=1 00:16:49.438 --rc genhtml_function_coverage=1 00:16:49.438 --rc genhtml_legend=1 00:16:49.438 --rc geninfo_all_blocks=1 00:16:49.438 --rc geninfo_unexecuted_blocks=1 00:16:49.438 00:16:49.438 ' 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:49.438 08:29:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:49.438 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:49.697 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.697 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.697 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.697 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.698 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.698 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.698 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:49.698 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.698 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:49.698 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:49.698 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:49.698 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.698 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.698 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.698 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:49.698 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:49.698 08:29:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:49.698 08:29:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:49.698 Cannot find device "nvmf_init_br" 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:49.698 08:29:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:49.698 Cannot find device "nvmf_init_br2" 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:49.698 Cannot find device "nvmf_tgt_br" 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:49.698 Cannot find device "nvmf_tgt_br2" 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:49.698 Cannot find device "nvmf_init_br" 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:49.698 Cannot find device "nvmf_init_br2" 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:49.698 Cannot find device "nvmf_tgt_br" 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:49.698 Cannot find device "nvmf_tgt_br2" 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:49.698 Cannot find device "nvmf_br" 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:49.698 Cannot find device "nvmf_init_if" 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:49.698 Cannot find device "nvmf_init_if2" 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:49.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:49.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:49.698 08:29:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:49.698 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:49.699 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i 
nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:49.958 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:49.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:49.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:16:49.958 00:16:49.958 --- 10.0.0.3 ping statistics --- 00:16:49.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.958 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:49.959 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:49.959 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:16:49.959 00:16:49.959 --- 10.0.0.4 ping statistics --- 00:16:49.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.959 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:49.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:49.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:16:49.959 00:16:49.959 --- 10.0.0.1 ping statistics --- 00:16:49.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.959 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:49.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:49.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:49.959 00:16:49.959 --- 10.0.0.2 ping statistics --- 00:16:49.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.959 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77420 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77420 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # '[' -z 77420 ']' 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@843 -- # local max_retries=100 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@847 -- # xtrace_disable 00:16:49.959 08:29:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.959 [2024-11-20 08:29:37.451075] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
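The pings above are the final step of nvmf_veth_init: the trace from nvmf/common.sh@177 onward builds a private topology with two host-side veths (10.0.0.1, 10.0.0.2), two target-side veths moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), all bridge-side peers enslaved to nvmf_br, and SPDK-tagged iptables rules admitting TCP/4420. Reduced to one interface pair (names and addresses as traced; the second pair is set up identically), the construction is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # host side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                  # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host

The nvmf_tgt started right after this (nvmfpid 77420) runs inside that namespace and later listens on 10.0.0.3.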
00:16:49.959 [2024-11-20 08:29:37.451181] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.218 [2024-11-20 08:29:37.603924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.218 [2024-11-20 08:29:37.668951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.218 [2024-11-20 08:29:37.669004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.218 [2024-11-20 08:29:37.669018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.218 [2024-11-20 08:29:37.669028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.218 [2024-11-20 08:29:37.669037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.218 [2024-11-20 08:29:37.669485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.218 [2024-11-20 08:29:37.730461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:51.155 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:16:51.155 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@871 -- # return 0 00:16:51.155 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:51.155 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@735 -- # xtrace_disable 00:16:51.155 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.155 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.155 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:51.156 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:51.156 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.156 [2024-11-20 08:29:38.543145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.156 [2024-11-20 08:29:38.551348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:51.156 null0 00:16:51.156 [2024-11-20 08:29:38.583189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:51.156 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:51.156 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77452 00:16:51.156 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:51.156 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77452 /tmp/host.sock 00:16:51.156 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # '[' -z 77452 ']' 00:16:51.156 08:29:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # local rpc_addr=/tmp/host.sock 00:16:51.156 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@843 -- # local max_retries=100 00:16:51.156 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:51.156 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:51.156 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@847 -- # xtrace_disable 00:16:51.156 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.156 [2024-11-20 08:29:38.669329] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:16:51.156 [2024-11-20 08:29:38.669424] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77452 ] 00:16:51.415 [2024-11-20 08:29:38.816211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.415 [2024-11-20 08:29:38.870555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.415 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:16:51.415 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@871 -- # return 0 00:16:51.415 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:51.415 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:51.415 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:51.415 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.415 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:51.415 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:51.415 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:51.415 08:29:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.674 [2024-11-20 08:29:38.977598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:51.674 08:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:51.674 08:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:51.674 08:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:51.674 08:29:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
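The rpc_cmd calls traced above configure the host-side bdev layer over /tmp/host.sock; rpc_cmd is effectively a wrapper around scripts/rpc.py, so replayed by hand the sequence is roughly the following (socket path, discovery address/port, host NQN and timeouts copied verbatim from the trace; the $rpc shorthand is only for readability):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /tmp/host.sock bdev_nvme_set_options -e 1
  $rpc -s /tmp/host.sock framework_start_init   # finish init; the host app was launched with --wait-for-rpc
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

The bdev_nvme debug messages that follow show the discovery controller attaching on 10.0.0.3:8009 and the discovered nqn.2016-06.io.spdk:cnode0 subsystem coming up as nvme0n1 on port 4420.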
00:16:52.610 [2024-11-20 08:29:40.036263] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:52.610 [2024-11-20 08:29:40.036305] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:52.610 [2024-11-20 08:29:40.036329] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:52.610 [2024-11-20 08:29:40.042310] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:52.610 [2024-11-20 08:29:40.096726] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:52.610 [2024-11-20 08:29:40.097741] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c88fc0:1 started. 00:16:52.610 [2024-11-20 08:29:40.099619] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:52.610 [2024-11-20 08:29:40.099681] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:52.610 [2024-11-20 08:29:40.099711] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:52.610 [2024-11-20 08:29:40.099729] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:52.610 [2024-11-20 08:29:40.099756] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:52.610 [2024-11-20 08:29:40.104952] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c88fc0 was disconnected and freed. delete nvme_qpair. 
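wait_for_bdev, whose trace is interleaved with the qpair messages here, simply polls get_bdev_list once a second until the reported bdev names equal the expected string ("nvme0n1" at this point, and later "" after the target interface is taken away). A stripped-down equivalent of that loop, using the same bdev_get_bdevs | jq | sort | xargs pipeline as the trace (the explicit while-loop and $expected variable are illustrative only):

  expected="nvme0n1"   # later invocations wait for an empty list: expected=""
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  while true; do
      names=$($rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
      [[ "$names" == "$expected" ]] && break
      sleep 1
  done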
00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:52.610 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:52.870 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:52.870 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:52.870 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:52.870 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:52.870 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:52.870 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.870 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:52.870 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:52.870 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:52.870 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:52.870 08:29:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:53.806 08:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:53.806 08:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:53.806 08:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:53.806 08:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:53.806 08:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.806 08:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:53.806 08:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:53.806 08:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:53.806 08:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:53.806 08:29:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:55.183 08:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:55.183 08:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.183 08:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:55.183 08:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.183 08:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:55.183 08:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:55.183 08:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:55.183 08:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:55.183 08:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:55.183 08:29:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:56.119 08:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:56.119 08:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.119 08:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:56.119 08:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:56.119 08:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:56.119 08:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:56.119 08:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:56.119 08:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:56.119 08:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:56.119 08:29:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:57.057 08:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:57.057 08:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.057 08:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:57.057 08:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:57.057 08:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.057 08:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:57.057 08:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:57.057 08:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 
]] 00:16:57.057 08:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:57.057 08:29:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:57.990 08:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:57.990 08:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:57.990 08:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.990 08:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:57.990 08:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.990 08:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:57.990 08:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:57.990 08:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:57.990 [2024-11-20 08:29:45.527807] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:57.990 [2024-11-20 08:29:45.527874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.990 [2024-11-20 08:29:45.527890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.990 [2024-11-20 08:29:45.527903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.990 [2024-11-20 08:29:45.527913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.990 [2024-11-20 08:29:45.527923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.990 [2024-11-20 08:29:45.527933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.990 [2024-11-20 08:29:45.527942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.990 [2024-11-20 08:29:45.527951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.990 [2024-11-20 08:29:45.527962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.990 [2024-11-20 08:29:45.527971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.990 [2024-11-20 08:29:45.527980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c65240 is same with the state(6) to be set 00:16:57.990 [2024-11-20 08:29:45.537803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c65240 (9): Bad file descriptor 00:16:57.990 08:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:57.990 08:29:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:57.990 [2024-11-20 08:29:45.547831] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:16:57.990 [2024-11-20 08:29:45.547856] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:16:57.990 [2024-11-20 08:29:45.547863] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:57.990 [2024-11-20 08:29:45.547869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:57.991 [2024-11-20 08:29:45.547909] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:59.365 08:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:59.365 08:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:59.365 08:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.365 08:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:16:59.365 08:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:59.365 08:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:59.365 08:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:59.365 [2024-11-20 08:29:46.586908] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:59.365 [2024-11-20 08:29:46.586983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c65240 with addr=10.0.0.3, port=4420 00:16:59.365 [2024-11-20 08:29:46.587019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c65240 is same with the state(6) to be set 00:16:59.365 [2024-11-20 08:29:46.587119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c65240 (9): Bad file descriptor 00:16:59.365 [2024-11-20 08:29:46.587906] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:16:59.365 [2024-11-20 08:29:46.587984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:59.365 [2024-11-20 08:29:46.588005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:59.365 [2024-11-20 08:29:46.588025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:59.365 [2024-11-20 08:29:46.588044] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:59.365 [2024-11-20 08:29:46.588056] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:59.365 [2024-11-20 08:29:46.588066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
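Editor's note: the errno-110 (connection timed out) messages above are the bdev_nvme layer cycling through its disconnect/reconnect state machine while the target address is unreachable. To observe that externally when reproducing the scenario, one option is to sample the controller list over the same RPC socket; this is only a sketch, and anything beyond the .name field is an assumption about the JSON shape, which can vary between SPDK versions.

    # Hedged sketch: sample the NVMe bdev controller list once a second while
    # the target-side interface is down, to see whether the controller entry
    # survives the failed reconnect attempts.
    for _ in $(seq 1 10); do
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
        sleep 1
    done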
00:16:59.365 [2024-11-20 08:29:46.588084] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:59.365 [2024-11-20 08:29:46.588096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:59.365 08:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:16:59.365 08:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:59.365 08:29:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:00.323 [2024-11-20 08:29:47.588154] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:00.323 [2024-11-20 08:29:47.588193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:00.323 [2024-11-20 08:29:47.588216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:00.323 [2024-11-20 08:29:47.588226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:00.323 [2024-11-20 08:29:47.588236] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:17:00.323 [2024-11-20 08:29:47.588246] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:00.323 [2024-11-20 08:29:47.588252] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:00.323 [2024-11-20 08:29:47.588257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:17:00.323 [2024-11-20 08:29:47.588301] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:17:00.323 [2024-11-20 08:29:47.588359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.323 [2024-11-20 08:29:47.588390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.323 [2024-11-20 08:29:47.588435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.323 [2024-11-20 08:29:47.588445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.323 [2024-11-20 08:29:47.588455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.323 [2024-11-20 08:29:47.588464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.323 [2024-11-20 08:29:47.588474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.323 [2024-11-20 08:29:47.588483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.323 [2024-11-20 08:29:47.588493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:00.323 [2024-11-20 08:29:47.588501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.323 [2024-11-20 08:29:47.588511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:17:00.323 [2024-11-20 08:29:47.588551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf0a20 (9): Bad file descriptor 00:17:00.323 [2024-11-20 08:29:47.589541] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:00.323 [2024-11-20 08:29:47.589564] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:00.323 08:29:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:01.261 08:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:01.261 08:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.261 08:29:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:01.261 08:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:01.261 08:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:01.261 08:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:01.261 08:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:01.261 08:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:01.261 08:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:01.261 08:29:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:02.198 [2024-11-20 08:29:49.601135] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:02.198 [2024-11-20 08:29:49.601183] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:02.198 [2024-11-20 08:29:49.601218] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:02.198 [2024-11-20 08:29:49.607170] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:17:02.198 [2024-11-20 08:29:49.661499] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:17:02.198 [2024-11-20 08:29:49.662304] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1c41f00:1 started. 00:17:02.198 [2024-11-20 08:29:49.663685] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:02.198 [2024-11-20 08:29:49.663748] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:02.198 [2024-11-20 08:29:49.663772] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:02.198 [2024-11-20 08:29:49.663788] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:17:02.198 [2024-11-20 08:29:49.663797] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:02.199 [2024-11-20 08:29:49.669621] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1c41f00 was disconnected and freed. delete nvme_qpair. 
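Editor's note: taken together, the lines above are the test's interface-flap cycle: strip the address from the target-side veth inside the namespace, wait for the bdev to vanish, restore the address and link, then wait for discovery to re-attach under a new controller instance (hence nvme1n1). A condensed sketch of that sequence is below; the namespace, interface, and address values are copied from the trace, while poll_for_bdev is the hypothetical poller sketched earlier, not a helper from the test itself.

    NS=nvmf_tgt_ns_spdk TGT_IF=nvmf_tgt_if ADDR=10.0.0.3/24

    # Simulate losing the target interface...
    ip netns exec "$NS" ip addr del "$ADDR" dev "$TGT_IF"
    ip netns exec "$NS" ip link set "$TGT_IF" down
    poll_for_bdev /tmp/host.sock ''          # all bdevs should drop off

    # ...then bring it back and expect discovery to attach a fresh controller.
    ip netns exec "$NS" ip addr add "$ADDR" dev "$TGT_IF"
    ip netns exec "$NS" ip link set "$TGT_IF" up
    poll_for_bdev /tmp/host.sock nvme1n1     # the new instance gets a new bdev name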
00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77452 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' -z 77452 ']' 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@961 -- # kill -0 77452 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # uname 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 77452 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:17:02.457 killing process with pid 77452 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@975 -- # echo 'killing process with pid 77452' 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # kill 77452 00:17:02.457 08:29:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@981 -- # wait 77452 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:02.716 rmmod nvme_tcp 00:17:02.716 rmmod nvme_fabrics 00:17:02.716 rmmod nvme_keyring 00:17:02.716 08:29:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77420 ']' 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77420 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' -z 77420 ']' 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@961 -- # kill -0 77420 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # uname 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 77420 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:17:02.716 killing process with pid 77420 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@975 -- # echo 'killing process with pid 77420' 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # kill 77420 00:17:02.716 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@981 -- # wait 77420 00:17:02.975 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:02.976 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:17:03.235 00:17:03.235 real 0m13.906s 00:17:03.235 user 0m23.360s 00:17:03.235 sys 0m2.499s 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1133 -- # xtrace_disable 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:03.235 ************************************ 00:17:03.235 END TEST nvmf_discovery_remove_ifc 00:17:03.235 ************************************ 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1114 -- # xtrace_disable 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.235 ************************************ 00:17:03.235 START TEST nvmf_identify_kernel_target 00:17:03.235 ************************************ 00:17:03.235 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:03.497 * Looking for test storage... 
00:17:03.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1638 -- # lcov --version 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:17:03.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.497 --rc genhtml_branch_coverage=1 00:17:03.497 --rc genhtml_function_coverage=1 00:17:03.497 --rc genhtml_legend=1 00:17:03.497 --rc geninfo_all_blocks=1 00:17:03.497 --rc geninfo_unexecuted_blocks=1 00:17:03.497 00:17:03.497 ' 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:17:03.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.497 --rc genhtml_branch_coverage=1 00:17:03.497 --rc genhtml_function_coverage=1 00:17:03.497 --rc genhtml_legend=1 00:17:03.497 --rc geninfo_all_blocks=1 00:17:03.497 --rc geninfo_unexecuted_blocks=1 00:17:03.497 00:17:03.497 ' 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:17:03.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.497 --rc genhtml_branch_coverage=1 00:17:03.497 --rc genhtml_function_coverage=1 00:17:03.497 --rc genhtml_legend=1 00:17:03.497 --rc geninfo_all_blocks=1 00:17:03.497 --rc geninfo_unexecuted_blocks=1 00:17:03.497 00:17:03.497 ' 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:17:03.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.497 --rc genhtml_branch_coverage=1 00:17:03.497 --rc genhtml_function_coverage=1 00:17:03.497 --rc genhtml_legend=1 00:17:03.497 --rc geninfo_all_blocks=1 00:17:03.497 --rc geninfo_unexecuted_blocks=1 00:17:03.497 00:17:03.497 ' 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
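Editor's note: the scripts/common.sh trace above is the dotted-version comparison used to decide whether the installed lcov predates 2.x: both strings are split on dots and dashes and compared field by field as integers. The following is an independent re-statement of that idea, not the real cmp_versions helper, and it assumes purely numeric fields.

    # Hedged sketch of a field-wise dotted-version "less than" test.
    version_lt() {
        local IFS=.- a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"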
00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.497 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:03.498 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == 
phy ]] 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:03.498 Cannot find device "nvmf_init_br" 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:03.498 Cannot find device "nvmf_init_br2" 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:03.498 Cannot find device "nvmf_tgt_br" 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:17:03.498 08:29:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.498 Cannot find device "nvmf_tgt_br2" 00:17:03.498 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # 
true 00:17:03.498 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:03.498 Cannot find device "nvmf_init_br" 00:17:03.498 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:17:03.498 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:03.498 Cannot find device "nvmf_init_br2" 00:17:03.498 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:17:03.498 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:03.498 Cannot find device "nvmf_tgt_br" 00:17:03.498 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:17:03.498 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:03.498 Cannot find device "nvmf_tgt_br2" 00:17:03.498 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:17:03.498 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:03.758 Cannot find device "nvmf_br" 00:17:03.758 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:17:03.758 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:03.758 Cannot find device "nvmf_init_if" 00:17:03.758 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:17:03.758 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:03.758 Cannot find device "nvmf_init_if2" 00:17:03.758 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:17:03.758 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.758 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:17:03.758 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.758 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:17:03.758 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:03.758 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:03.759 08:29:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 
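Editor's note: the nvmf_veth_init sequence above builds a small two-sided topology: host-side veth peers enslaved to a bridge, target-side peers moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.x/24 addresses on each end, iptables ACCEPT rules for port 4420, and ping checks of every address (which follow below). The sketch here is condensed to a single initiator/target pair, copying commands from the trace but omitting the second pair and the FORWARD rule; it needs root and assumes the interface names are free.

    # Condensed one-pair version of the topology the common.sh helpers build.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers so initiator and target can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # Allow NVMe/TCP traffic and confirm reachability, as the log does next.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3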
00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:03.759 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:03.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:17:03.759 00:17:03.759 --- 10.0.0.3 ping statistics --- 00:17:03.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.759 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:03.759 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:03.759 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:17:03.759 00:17:03.759 --- 10.0.0.4 ping statistics --- 00:17:03.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.759 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:17:03.759 00:17:03.759 --- 10.0.0.1 ping statistics --- 00:17:03.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.759 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:03.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:03.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:17:03.759 00:17:03.759 --- 10.0.0.2 ping statistics --- 00:17:03.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.759 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:03.759 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:04.018 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:04.018 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:04.018 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:17:04.018 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:04.019 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:04.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:04.276 Waiting for block devices as requested 00:17:04.276 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:04.534 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:04.534 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:04.534 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:04.534 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:04.534 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1595 -- # local device=nvme0n1 00:17:04.534 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:04.534 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:17:04.534 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:04.534 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:04.534 08:29:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:04.534 No valid GPT data, bailing 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1595 -- # local device=nvme0n2 00:17:04.534 08:29:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:04.534 No valid GPT data, bailing 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1595 -- # local device=nvme0n3 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:04.534 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:04.793 No valid GPT data, bailing 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1595 -- # local device=nvme1n1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1598 -- # [[ none != none ]] 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:04.793 No valid GPT data, bailing 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -a 10.0.0.1 -t tcp -s 4420 00:17:04.793 00:17:04.793 Discovery Log Number of Records 2, Generation counter 2 00:17:04.793 =====Discovery Log Entry 0====== 00:17:04.793 trtype: tcp 00:17:04.793 adrfam: ipv4 00:17:04.793 subtype: current discovery subsystem 00:17:04.793 treq: not specified, sq flow control disable supported 00:17:04.793 portid: 1 00:17:04.793 trsvcid: 4420 00:17:04.793 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:04.793 traddr: 10.0.0.1 00:17:04.793 eflags: none 00:17:04.793 sectype: none 00:17:04.793 =====Discovery Log Entry 1====== 00:17:04.793 trtype: tcp 00:17:04.793 adrfam: ipv4 00:17:04.793 subtype: nvme subsystem 00:17:04.793 treq: not 
specified, sq flow control disable supported 00:17:04.793 portid: 1 00:17:04.793 trsvcid: 4420 00:17:04.793 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:04.793 traddr: 10.0.0.1 00:17:04.793 eflags: none 00:17:04.793 sectype: none 00:17:04.793 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:04.793 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:05.052 ===================================================== 00:17:05.052 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:05.052 ===================================================== 00:17:05.052 Controller Capabilities/Features 00:17:05.052 ================================ 00:17:05.052 Vendor ID: 0000 00:17:05.052 Subsystem Vendor ID: 0000 00:17:05.052 Serial Number: a819e378bb6ac9f2c8d6 00:17:05.052 Model Number: Linux 00:17:05.052 Firmware Version: 6.8.9-20 00:17:05.052 Recommended Arb Burst: 0 00:17:05.052 IEEE OUI Identifier: 00 00 00 00:17:05.052 Multi-path I/O 00:17:05.052 May have multiple subsystem ports: No 00:17:05.052 May have multiple controllers: No 00:17:05.052 Associated with SR-IOV VF: No 00:17:05.052 Max Data Transfer Size: Unlimited 00:17:05.052 Max Number of Namespaces: 0 00:17:05.052 Max Number of I/O Queues: 1024 00:17:05.052 NVMe Specification Version (VS): 1.3 00:17:05.052 NVMe Specification Version (Identify): 1.3 00:17:05.052 Maximum Queue Entries: 1024 00:17:05.052 Contiguous Queues Required: No 00:17:05.052 Arbitration Mechanisms Supported 00:17:05.052 Weighted Round Robin: Not Supported 00:17:05.052 Vendor Specific: Not Supported 00:17:05.052 Reset Timeout: 7500 ms 00:17:05.052 Doorbell Stride: 4 bytes 00:17:05.052 NVM Subsystem Reset: Not Supported 00:17:05.052 Command Sets Supported 00:17:05.052 NVM Command Set: Supported 00:17:05.052 Boot Partition: Not Supported 00:17:05.052 Memory Page Size Minimum: 4096 bytes 00:17:05.052 Memory Page Size Maximum: 4096 bytes 00:17:05.052 Persistent Memory Region: Not Supported 00:17:05.052 Optional Asynchronous Events Supported 00:17:05.052 Namespace Attribute Notices: Not Supported 00:17:05.052 Firmware Activation Notices: Not Supported 00:17:05.052 ANA Change Notices: Not Supported 00:17:05.052 PLE Aggregate Log Change Notices: Not Supported 00:17:05.052 LBA Status Info Alert Notices: Not Supported 00:17:05.052 EGE Aggregate Log Change Notices: Not Supported 00:17:05.052 Normal NVM Subsystem Shutdown event: Not Supported 00:17:05.052 Zone Descriptor Change Notices: Not Supported 00:17:05.052 Discovery Log Change Notices: Supported 00:17:05.052 Controller Attributes 00:17:05.052 128-bit Host Identifier: Not Supported 00:17:05.052 Non-Operational Permissive Mode: Not Supported 00:17:05.052 NVM Sets: Not Supported 00:17:05.052 Read Recovery Levels: Not Supported 00:17:05.052 Endurance Groups: Not Supported 00:17:05.052 Predictable Latency Mode: Not Supported 00:17:05.052 Traffic Based Keep ALive: Not Supported 00:17:05.052 Namespace Granularity: Not Supported 00:17:05.052 SQ Associations: Not Supported 00:17:05.052 UUID List: Not Supported 00:17:05.052 Multi-Domain Subsystem: Not Supported 00:17:05.052 Fixed Capacity Management: Not Supported 00:17:05.052 Variable Capacity Management: Not Supported 00:17:05.053 Delete Endurance Group: Not Supported 00:17:05.053 Delete NVM Set: Not Supported 00:17:05.053 Extended LBA Formats Supported: Not Supported 00:17:05.053 Flexible Data 
Placement Supported: Not Supported 00:17:05.053 00:17:05.053 Controller Memory Buffer Support 00:17:05.053 ================================ 00:17:05.053 Supported: No 00:17:05.053 00:17:05.053 Persistent Memory Region Support 00:17:05.053 ================================ 00:17:05.053 Supported: No 00:17:05.053 00:17:05.053 Admin Command Set Attributes 00:17:05.053 ============================ 00:17:05.053 Security Send/Receive: Not Supported 00:17:05.053 Format NVM: Not Supported 00:17:05.053 Firmware Activate/Download: Not Supported 00:17:05.053 Namespace Management: Not Supported 00:17:05.053 Device Self-Test: Not Supported 00:17:05.053 Directives: Not Supported 00:17:05.053 NVMe-MI: Not Supported 00:17:05.053 Virtualization Management: Not Supported 00:17:05.053 Doorbell Buffer Config: Not Supported 00:17:05.053 Get LBA Status Capability: Not Supported 00:17:05.053 Command & Feature Lockdown Capability: Not Supported 00:17:05.053 Abort Command Limit: 1 00:17:05.053 Async Event Request Limit: 1 00:17:05.053 Number of Firmware Slots: N/A 00:17:05.053 Firmware Slot 1 Read-Only: N/A 00:17:05.053 Firmware Activation Without Reset: N/A 00:17:05.053 Multiple Update Detection Support: N/A 00:17:05.053 Firmware Update Granularity: No Information Provided 00:17:05.053 Per-Namespace SMART Log: No 00:17:05.053 Asymmetric Namespace Access Log Page: Not Supported 00:17:05.053 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:05.053 Command Effects Log Page: Not Supported 00:17:05.053 Get Log Page Extended Data: Supported 00:17:05.053 Telemetry Log Pages: Not Supported 00:17:05.053 Persistent Event Log Pages: Not Supported 00:17:05.053 Supported Log Pages Log Page: May Support 00:17:05.053 Commands Supported & Effects Log Page: Not Supported 00:17:05.053 Feature Identifiers & Effects Log Page:May Support 00:17:05.053 NVMe-MI Commands & Effects Log Page: May Support 00:17:05.053 Data Area 4 for Telemetry Log: Not Supported 00:17:05.053 Error Log Page Entries Supported: 1 00:17:05.053 Keep Alive: Not Supported 00:17:05.053 00:17:05.053 NVM Command Set Attributes 00:17:05.053 ========================== 00:17:05.053 Submission Queue Entry Size 00:17:05.053 Max: 1 00:17:05.053 Min: 1 00:17:05.053 Completion Queue Entry Size 00:17:05.053 Max: 1 00:17:05.053 Min: 1 00:17:05.053 Number of Namespaces: 0 00:17:05.053 Compare Command: Not Supported 00:17:05.053 Write Uncorrectable Command: Not Supported 00:17:05.053 Dataset Management Command: Not Supported 00:17:05.053 Write Zeroes Command: Not Supported 00:17:05.053 Set Features Save Field: Not Supported 00:17:05.053 Reservations: Not Supported 00:17:05.053 Timestamp: Not Supported 00:17:05.053 Copy: Not Supported 00:17:05.053 Volatile Write Cache: Not Present 00:17:05.053 Atomic Write Unit (Normal): 1 00:17:05.053 Atomic Write Unit (PFail): 1 00:17:05.053 Atomic Compare & Write Unit: 1 00:17:05.053 Fused Compare & Write: Not Supported 00:17:05.053 Scatter-Gather List 00:17:05.053 SGL Command Set: Supported 00:17:05.053 SGL Keyed: Not Supported 00:17:05.053 SGL Bit Bucket Descriptor: Not Supported 00:17:05.053 SGL Metadata Pointer: Not Supported 00:17:05.053 Oversized SGL: Not Supported 00:17:05.053 SGL Metadata Address: Not Supported 00:17:05.053 SGL Offset: Supported 00:17:05.053 Transport SGL Data Block: Not Supported 00:17:05.053 Replay Protected Memory Block: Not Supported 00:17:05.053 00:17:05.053 Firmware Slot Information 00:17:05.053 ========================= 00:17:05.053 Active slot: 0 00:17:05.053 00:17:05.053 00:17:05.053 Error Log 
00:17:05.053 ========= 00:17:05.053 00:17:05.053 Active Namespaces 00:17:05.053 ================= 00:17:05.053 Discovery Log Page 00:17:05.053 ================== 00:17:05.053 Generation Counter: 2 00:17:05.053 Number of Records: 2 00:17:05.053 Record Format: 0 00:17:05.053 00:17:05.053 Discovery Log Entry 0 00:17:05.053 ---------------------- 00:17:05.053 Transport Type: 3 (TCP) 00:17:05.053 Address Family: 1 (IPv4) 00:17:05.053 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:05.053 Entry Flags: 00:17:05.053 Duplicate Returned Information: 0 00:17:05.053 Explicit Persistent Connection Support for Discovery: 0 00:17:05.053 Transport Requirements: 00:17:05.053 Secure Channel: Not Specified 00:17:05.053 Port ID: 1 (0x0001) 00:17:05.053 Controller ID: 65535 (0xffff) 00:17:05.053 Admin Max SQ Size: 32 00:17:05.053 Transport Service Identifier: 4420 00:17:05.053 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:05.053 Transport Address: 10.0.0.1 00:17:05.053 Discovery Log Entry 1 00:17:05.053 ---------------------- 00:17:05.053 Transport Type: 3 (TCP) 00:17:05.053 Address Family: 1 (IPv4) 00:17:05.053 Subsystem Type: 2 (NVM Subsystem) 00:17:05.053 Entry Flags: 00:17:05.053 Duplicate Returned Information: 0 00:17:05.053 Explicit Persistent Connection Support for Discovery: 0 00:17:05.053 Transport Requirements: 00:17:05.053 Secure Channel: Not Specified 00:17:05.053 Port ID: 1 (0x0001) 00:17:05.053 Controller ID: 65535 (0xffff) 00:17:05.053 Admin Max SQ Size: 32 00:17:05.053 Transport Service Identifier: 4420 00:17:05.053 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:05.053 Transport Address: 10.0.0.1 00:17:05.053 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:05.313 get_feature(0x01) failed 00:17:05.313 get_feature(0x02) failed 00:17:05.313 get_feature(0x04) failed 00:17:05.313 ===================================================== 00:17:05.313 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:05.313 ===================================================== 00:17:05.313 Controller Capabilities/Features 00:17:05.313 ================================ 00:17:05.313 Vendor ID: 0000 00:17:05.313 Subsystem Vendor ID: 0000 00:17:05.313 Serial Number: 75a18e98147ea88606a9 00:17:05.313 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:05.313 Firmware Version: 6.8.9-20 00:17:05.313 Recommended Arb Burst: 6 00:17:05.313 IEEE OUI Identifier: 00 00 00 00:17:05.313 Multi-path I/O 00:17:05.313 May have multiple subsystem ports: Yes 00:17:05.313 May have multiple controllers: Yes 00:17:05.313 Associated with SR-IOV VF: No 00:17:05.313 Max Data Transfer Size: Unlimited 00:17:05.313 Max Number of Namespaces: 1024 00:17:05.313 Max Number of I/O Queues: 128 00:17:05.313 NVMe Specification Version (VS): 1.3 00:17:05.313 NVMe Specification Version (Identify): 1.3 00:17:05.313 Maximum Queue Entries: 1024 00:17:05.313 Contiguous Queues Required: No 00:17:05.313 Arbitration Mechanisms Supported 00:17:05.313 Weighted Round Robin: Not Supported 00:17:05.313 Vendor Specific: Not Supported 00:17:05.313 Reset Timeout: 7500 ms 00:17:05.313 Doorbell Stride: 4 bytes 00:17:05.313 NVM Subsystem Reset: Not Supported 00:17:05.313 Command Sets Supported 00:17:05.313 NVM Command Set: Supported 00:17:05.313 Boot Partition: Not Supported 00:17:05.313 Memory 
Page Size Minimum: 4096 bytes 00:17:05.313 Memory Page Size Maximum: 4096 bytes 00:17:05.313 Persistent Memory Region: Not Supported 00:17:05.313 Optional Asynchronous Events Supported 00:17:05.313 Namespace Attribute Notices: Supported 00:17:05.313 Firmware Activation Notices: Not Supported 00:17:05.313 ANA Change Notices: Supported 00:17:05.313 PLE Aggregate Log Change Notices: Not Supported 00:17:05.313 LBA Status Info Alert Notices: Not Supported 00:17:05.313 EGE Aggregate Log Change Notices: Not Supported 00:17:05.313 Normal NVM Subsystem Shutdown event: Not Supported 00:17:05.313 Zone Descriptor Change Notices: Not Supported 00:17:05.313 Discovery Log Change Notices: Not Supported 00:17:05.313 Controller Attributes 00:17:05.313 128-bit Host Identifier: Supported 00:17:05.313 Non-Operational Permissive Mode: Not Supported 00:17:05.313 NVM Sets: Not Supported 00:17:05.313 Read Recovery Levels: Not Supported 00:17:05.313 Endurance Groups: Not Supported 00:17:05.313 Predictable Latency Mode: Not Supported 00:17:05.313 Traffic Based Keep ALive: Supported 00:17:05.313 Namespace Granularity: Not Supported 00:17:05.313 SQ Associations: Not Supported 00:17:05.313 UUID List: Not Supported 00:17:05.313 Multi-Domain Subsystem: Not Supported 00:17:05.313 Fixed Capacity Management: Not Supported 00:17:05.313 Variable Capacity Management: Not Supported 00:17:05.313 Delete Endurance Group: Not Supported 00:17:05.313 Delete NVM Set: Not Supported 00:17:05.313 Extended LBA Formats Supported: Not Supported 00:17:05.313 Flexible Data Placement Supported: Not Supported 00:17:05.313 00:17:05.313 Controller Memory Buffer Support 00:17:05.313 ================================ 00:17:05.313 Supported: No 00:17:05.313 00:17:05.313 Persistent Memory Region Support 00:17:05.313 ================================ 00:17:05.313 Supported: No 00:17:05.313 00:17:05.313 Admin Command Set Attributes 00:17:05.313 ============================ 00:17:05.313 Security Send/Receive: Not Supported 00:17:05.314 Format NVM: Not Supported 00:17:05.314 Firmware Activate/Download: Not Supported 00:17:05.314 Namespace Management: Not Supported 00:17:05.314 Device Self-Test: Not Supported 00:17:05.314 Directives: Not Supported 00:17:05.314 NVMe-MI: Not Supported 00:17:05.314 Virtualization Management: Not Supported 00:17:05.314 Doorbell Buffer Config: Not Supported 00:17:05.314 Get LBA Status Capability: Not Supported 00:17:05.314 Command & Feature Lockdown Capability: Not Supported 00:17:05.314 Abort Command Limit: 4 00:17:05.314 Async Event Request Limit: 4 00:17:05.314 Number of Firmware Slots: N/A 00:17:05.314 Firmware Slot 1 Read-Only: N/A 00:17:05.314 Firmware Activation Without Reset: N/A 00:17:05.314 Multiple Update Detection Support: N/A 00:17:05.314 Firmware Update Granularity: No Information Provided 00:17:05.314 Per-Namespace SMART Log: Yes 00:17:05.314 Asymmetric Namespace Access Log Page: Supported 00:17:05.314 ANA Transition Time : 10 sec 00:17:05.314 00:17:05.314 Asymmetric Namespace Access Capabilities 00:17:05.314 ANA Optimized State : Supported 00:17:05.314 ANA Non-Optimized State : Supported 00:17:05.314 ANA Inaccessible State : Supported 00:17:05.314 ANA Persistent Loss State : Supported 00:17:05.314 ANA Change State : Supported 00:17:05.314 ANAGRPID is not changed : No 00:17:05.314 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:05.314 00:17:05.314 ANA Group Identifier Maximum : 128 00:17:05.314 Number of ANA Group Identifiers : 128 00:17:05.314 Max Number of Allowed Namespaces : 1024 00:17:05.314 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:05.314 Command Effects Log Page: Supported 00:17:05.314 Get Log Page Extended Data: Supported 00:17:05.314 Telemetry Log Pages: Not Supported 00:17:05.314 Persistent Event Log Pages: Not Supported 00:17:05.314 Supported Log Pages Log Page: May Support 00:17:05.314 Commands Supported & Effects Log Page: Not Supported 00:17:05.314 Feature Identifiers & Effects Log Page:May Support 00:17:05.314 NVMe-MI Commands & Effects Log Page: May Support 00:17:05.314 Data Area 4 for Telemetry Log: Not Supported 00:17:05.314 Error Log Page Entries Supported: 128 00:17:05.314 Keep Alive: Supported 00:17:05.314 Keep Alive Granularity: 1000 ms 00:17:05.314 00:17:05.314 NVM Command Set Attributes 00:17:05.314 ========================== 00:17:05.314 Submission Queue Entry Size 00:17:05.314 Max: 64 00:17:05.314 Min: 64 00:17:05.314 Completion Queue Entry Size 00:17:05.314 Max: 16 00:17:05.314 Min: 16 00:17:05.314 Number of Namespaces: 1024 00:17:05.314 Compare Command: Not Supported 00:17:05.314 Write Uncorrectable Command: Not Supported 00:17:05.314 Dataset Management Command: Supported 00:17:05.314 Write Zeroes Command: Supported 00:17:05.314 Set Features Save Field: Not Supported 00:17:05.314 Reservations: Not Supported 00:17:05.314 Timestamp: Not Supported 00:17:05.314 Copy: Not Supported 00:17:05.314 Volatile Write Cache: Present 00:17:05.314 Atomic Write Unit (Normal): 1 00:17:05.314 Atomic Write Unit (PFail): 1 00:17:05.314 Atomic Compare & Write Unit: 1 00:17:05.314 Fused Compare & Write: Not Supported 00:17:05.314 Scatter-Gather List 00:17:05.314 SGL Command Set: Supported 00:17:05.314 SGL Keyed: Not Supported 00:17:05.314 SGL Bit Bucket Descriptor: Not Supported 00:17:05.314 SGL Metadata Pointer: Not Supported 00:17:05.314 Oversized SGL: Not Supported 00:17:05.314 SGL Metadata Address: Not Supported 00:17:05.314 SGL Offset: Supported 00:17:05.314 Transport SGL Data Block: Not Supported 00:17:05.314 Replay Protected Memory Block: Not Supported 00:17:05.314 00:17:05.314 Firmware Slot Information 00:17:05.314 ========================= 00:17:05.314 Active slot: 0 00:17:05.314 00:17:05.314 Asymmetric Namespace Access 00:17:05.314 =========================== 00:17:05.314 Change Count : 0 00:17:05.314 Number of ANA Group Descriptors : 1 00:17:05.314 ANA Group Descriptor : 0 00:17:05.314 ANA Group ID : 1 00:17:05.314 Number of NSID Values : 1 00:17:05.314 Change Count : 0 00:17:05.314 ANA State : 1 00:17:05.314 Namespace Identifier : 1 00:17:05.314 00:17:05.314 Commands Supported and Effects 00:17:05.314 ============================== 00:17:05.314 Admin Commands 00:17:05.314 -------------- 00:17:05.314 Get Log Page (02h): Supported 00:17:05.314 Identify (06h): Supported 00:17:05.314 Abort (08h): Supported 00:17:05.314 Set Features (09h): Supported 00:17:05.314 Get Features (0Ah): Supported 00:17:05.314 Asynchronous Event Request (0Ch): Supported 00:17:05.314 Keep Alive (18h): Supported 00:17:05.314 I/O Commands 00:17:05.314 ------------ 00:17:05.314 Flush (00h): Supported 00:17:05.314 Write (01h): Supported LBA-Change 00:17:05.314 Read (02h): Supported 00:17:05.314 Write Zeroes (08h): Supported LBA-Change 00:17:05.314 Dataset Management (09h): Supported 00:17:05.314 00:17:05.314 Error Log 00:17:05.314 ========= 00:17:05.314 Entry: 0 00:17:05.314 Error Count: 0x3 00:17:05.314 Submission Queue Id: 0x0 00:17:05.314 Command Id: 0x5 00:17:05.314 Phase Bit: 0 00:17:05.314 Status Code: 0x2 00:17:05.314 Status Code Type: 0x0 00:17:05.314 Do Not Retry: 1 00:17:05.314 Error 
Location: 0x28 00:17:05.314 LBA: 0x0 00:17:05.314 Namespace: 0x0 00:17:05.314 Vendor Log Page: 0x0 00:17:05.314 ----------- 00:17:05.314 Entry: 1 00:17:05.314 Error Count: 0x2 00:17:05.314 Submission Queue Id: 0x0 00:17:05.314 Command Id: 0x5 00:17:05.314 Phase Bit: 0 00:17:05.314 Status Code: 0x2 00:17:05.314 Status Code Type: 0x0 00:17:05.314 Do Not Retry: 1 00:17:05.314 Error Location: 0x28 00:17:05.314 LBA: 0x0 00:17:05.314 Namespace: 0x0 00:17:05.314 Vendor Log Page: 0x0 00:17:05.314 ----------- 00:17:05.314 Entry: 2 00:17:05.314 Error Count: 0x1 00:17:05.314 Submission Queue Id: 0x0 00:17:05.314 Command Id: 0x4 00:17:05.314 Phase Bit: 0 00:17:05.314 Status Code: 0x2 00:17:05.314 Status Code Type: 0x0 00:17:05.314 Do Not Retry: 1 00:17:05.314 Error Location: 0x28 00:17:05.314 LBA: 0x0 00:17:05.314 Namespace: 0x0 00:17:05.314 Vendor Log Page: 0x0 00:17:05.314 00:17:05.314 Number of Queues 00:17:05.314 ================ 00:17:05.314 Number of I/O Submission Queues: 128 00:17:05.314 Number of I/O Completion Queues: 128 00:17:05.314 00:17:05.314 ZNS Specific Controller Data 00:17:05.314 ============================ 00:17:05.314 Zone Append Size Limit: 0 00:17:05.314 00:17:05.314 00:17:05.314 Active Namespaces 00:17:05.314 ================= 00:17:05.314 get_feature(0x05) failed 00:17:05.314 Namespace ID:1 00:17:05.314 Command Set Identifier: NVM (00h) 00:17:05.314 Deallocate: Supported 00:17:05.314 Deallocated/Unwritten Error: Not Supported 00:17:05.314 Deallocated Read Value: Unknown 00:17:05.314 Deallocate in Write Zeroes: Not Supported 00:17:05.314 Deallocated Guard Field: 0xFFFF 00:17:05.314 Flush: Supported 00:17:05.314 Reservation: Not Supported 00:17:05.314 Namespace Sharing Capabilities: Multiple Controllers 00:17:05.314 Size (in LBAs): 1310720 (5GiB) 00:17:05.314 Capacity (in LBAs): 1310720 (5GiB) 00:17:05.314 Utilization (in LBAs): 1310720 (5GiB) 00:17:05.314 UUID: fad96841-9459-431f-ae18-5cdd7ce97947 00:17:05.314 Thin Provisioning: Not Supported 00:17:05.314 Per-NS Atomic Units: Yes 00:17:05.314 Atomic Boundary Size (Normal): 0 00:17:05.314 Atomic Boundary Size (PFail): 0 00:17:05.314 Atomic Boundary Offset: 0 00:17:05.314 NGUID/EUI64 Never Reused: No 00:17:05.314 ANA group ID: 1 00:17:05.314 Namespace Write Protected: No 00:17:05.314 Number of LBA Formats: 1 00:17:05.314 Current LBA Format: LBA Format #00 00:17:05.314 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:05.314 00:17:05.314 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:05.315 rmmod nvme_tcp 00:17:05.315 rmmod nvme_fabrics 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:17:05.315 08:29:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:05.315 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.574 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:05.574 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:05.574 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:05.574 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:05.574 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:05.574 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:05.574 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:05.574 08:29:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:05.574 08:29:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:06.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:06.511 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:06.511 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:06.511 ************************************ 00:17:06.511 END TEST nvmf_identify_kernel_target 00:17:06.511 ************************************ 00:17:06.511 00:17:06.511 real 0m3.299s 00:17:06.511 user 0m1.166s 00:17:06.511 sys 0m1.512s 00:17:06.511 08:29:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1133 -- # xtrace_disable 00:17:06.511 08:29:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1114 -- # xtrace_disable 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.856 ************************************ 00:17:06.856 START TEST nvmf_auth_host 00:17:06.856 ************************************ 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:06.856 * Looking for test storage... 
00:17:06.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1638 -- # lcov --version 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.856 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:17:06.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.857 --rc genhtml_branch_coverage=1 00:17:06.857 --rc genhtml_function_coverage=1 00:17:06.857 --rc genhtml_legend=1 00:17:06.857 --rc geninfo_all_blocks=1 00:17:06.857 --rc geninfo_unexecuted_blocks=1 00:17:06.857 00:17:06.857 ' 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:17:06.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.857 --rc genhtml_branch_coverage=1 00:17:06.857 --rc genhtml_function_coverage=1 00:17:06.857 --rc genhtml_legend=1 00:17:06.857 --rc geninfo_all_blocks=1 00:17:06.857 --rc geninfo_unexecuted_blocks=1 00:17:06.857 00:17:06.857 ' 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:17:06.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.857 --rc genhtml_branch_coverage=1 00:17:06.857 --rc genhtml_function_coverage=1 00:17:06.857 --rc genhtml_legend=1 00:17:06.857 --rc geninfo_all_blocks=1 00:17:06.857 --rc geninfo_unexecuted_blocks=1 00:17:06.857 00:17:06.857 ' 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:17:06.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.857 --rc genhtml_branch_coverage=1 00:17:06.857 --rc genhtml_function_coverage=1 00:17:06.857 --rc genhtml_legend=1 00:17:06.857 --rc geninfo_all_blocks=1 00:17:06.857 --rc geninfo_unexecuted_blocks=1 00:17:06.857 00:17:06.857 ' 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:17:06.857 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:06.857 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.858 
08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:06.858 Cannot find device "nvmf_init_br" 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:06.858 Cannot find device "nvmf_init_br2" 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:06.858 Cannot find device "nvmf_tgt_br" 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:06.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:07.117 Cannot find device "nvmf_tgt_br2" 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:07.117 Cannot find device "nvmf_init_br" 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:07.117 Cannot find device "nvmf_init_br2" 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:07.117 Cannot find device "nvmf_tgt_br" 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:07.117 Cannot find device "nvmf_tgt_br2" 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:07.117 Cannot find device "nvmf_br" 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 
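The "Cannot find device" messages above are expected: before building its topology, nvmf_veth_init first tears down whatever a previous run may have left behind, and the "true" trace entries show each failing ip command being tolerated. A minimal sketch of that tolerant-cleanup pattern, using the device and namespace names seen in the trace (illustrative only, not the actual SPDK helper):

# Remove leftovers from an earlier run; ignore interfaces that do not exist.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster 2>/dev/null || true
    ip link set "$dev" down 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true                      # stale bridge, if any
ip link delete nvmf_init_if 2>/dev/null || true                             # stale host-side veth
ip link delete nvmf_init_if2 2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true  # fails if the namespace is gone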
00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:07.117 Cannot find device "nvmf_init_if" 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:07.117 Cannot find device "nvmf_init_if2" 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:07.117 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:07.377 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.377 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:17:07.377 00:17:07.377 --- 10.0.0.3 ping statistics --- 00:17:07.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.377 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:07.377 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:07.377 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:17:07.377 00:17:07.377 --- 10.0.0.4 ping statistics --- 00:17:07.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.377 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:07.377 00:17:07.377 --- 10.0.0.1 ping statistics --- 00:17:07.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.377 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:07.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:07.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:17:07.377 00:17:07.377 --- 10.0.0.2 ping statistics --- 00:17:07.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.377 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78449 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78449 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # '[' -z 78449 ']' 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@843 -- # local max_retries=100 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
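At this point the traced commands have built the virtual test network: two initiator veth pairs stay on the host (10.0.0.1 and 10.0.0.2), two target veth pairs are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), everything is joined through the nvmf_br bridge, iptables ACCEPT rules open TCP port 4420, and the four pings confirm reachability in both directions before nvmf_tgt is started inside the namespace. A condensed reconstruction of one initiator/target pair, with names and addresses taken from the trace (a sketch, not the full nvmf_veth_init):

# Target side lives in its own network namespace; initiator side stays on the host.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # host-facing pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-facing pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
# Bridge the two pairs together and let NVMe/TCP traffic through.
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3    # host -> namespace reachability check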
00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@847 -- # xtrace_disable 00:17:07.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@871 -- # return 0 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@735 -- # xtrace_disable 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e7d8ecb16fbec42dfbac2e0ee93f422f 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.60S 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e7d8ecb16fbec42dfbac2e0ee93f422f 0 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e7d8ecb16fbec42dfbac2e0ee93f422f 0 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e7d8ecb16fbec42dfbac2e0ee93f422f 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:08.758 08:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:08.758 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.60S 00:17:08.758 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.60S 00:17:08.758 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.60S 00:17:08.758 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:08.759 08:29:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3c2c20f8a5e45eece3d614d0b55e0610b408b9c80dbbfb24b50d4169e2e3b355 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.baR 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3c2c20f8a5e45eece3d614d0b55e0610b408b9c80dbbfb24b50d4169e2e3b355 3 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3c2c20f8a5e45eece3d614d0b55e0610b408b9c80dbbfb24b50d4169e2e3b355 3 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3c2c20f8a5e45eece3d614d0b55e0610b408b9c80dbbfb24b50d4169e2e3b355 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.baR 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.baR 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.baR 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c071415446a367a8d550ec9b23386d3e38ef44549135addd 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hVi 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c071415446a367a8d550ec9b23386d3e38ef44549135addd 0 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c071415446a367a8d550ec9b23386d3e38ef44549135addd 0 
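The gen_dhchap_key traces above show how each secret is produced: xxd reads the requested number of random bytes from /dev/urandom as a hex string, an inline Python helper (the "python -" entries) wraps it in the DHHC-1:<digest id>:<base64 payload>: form, and the result is written to a mktemp file with mode 0600. Judging from the values logged later, the payload appears to be the ASCII key with a little-endian CRC32 trailer appended; that inference is an assumption. A sketch of the flow for a 32-character null-digest key (not the SPDK function itself):

key=$(xxd -p -c0 -l 16 /dev/urandom)          # 16 random bytes -> 32 hex characters
file=$(mktemp -t spdk.key-null.XXX)
# Wrap as DHHC-1:<digest id>:<base64(key + CRC32 trailer)>: -- trailer format inferred from the log.
python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' "$key" 0 > "$file"
chmod 0600 "$file"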
00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c071415446a367a8d550ec9b23386d3e38ef44549135addd 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hVi 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hVi 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.hVi 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e011e81b553016b5c24568cf1e453dd2a0383a115844af50 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.v4f 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e011e81b553016b5c24568cf1e453dd2a0383a115844af50 2 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e011e81b553016b5c24568cf1e453dd2a0383a115844af50 2 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e011e81b553016b5c24568cf1e453dd2a0383a115844af50 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.v4f 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.v4f 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.v4f 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:08.759 08:29:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bf91175977bed80491b1b45e81782f4e 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.k4u 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bf91175977bed80491b1b45e81782f4e 1 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bf91175977bed80491b1b45e81782f4e 1 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bf91175977bed80491b1b45e81782f4e 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.k4u 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.k4u 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.k4u 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8997b19cfc4fe0ad3f01897f0d113fcb 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.fuM 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8997b19cfc4fe0ad3f01897f0d113fcb 1 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8997b19cfc4fe0ad3f01897f0d113fcb 1 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=8997b19cfc4fe0ad3f01897f0d113fcb 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:08.759 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.fuM 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.fuM 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.fuM 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9c21539d7a228b4a28e77ddf1ea6eed79304eb8383c0cc6d 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WiZ 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9c21539d7a228b4a28e77ddf1ea6eed79304eb8383c0cc6d 2 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9c21539d7a228b4a28e77ddf1ea6eed79304eb8383c0cc6d 2 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:09.019 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9c21539d7a228b4a28e77ddf1ea6eed79304eb8383c0cc6d 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WiZ 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WiZ 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.WiZ 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:09.020 08:29:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=65e011826cb0f5735df7b820cf22f7e3 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.IOx 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 65e011826cb0f5735df7b820cf22f7e3 0 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 65e011826cb0f5735df7b820cf22f7e3 0 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=65e011826cb0f5735df7b820cf22f7e3 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.IOx 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.IOx 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.IOx 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=114ca3d018285f73df3aabc4e6379a81d38b09f198140d1db73e60ca25f29abd 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.VEq 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 114ca3d018285f73df3aabc4e6379a81d38b09f198140d1db73e60ca25f29abd 3 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 114ca3d018285f73df3aabc4e6379a81d38b09f198140d1db73e60ca25f29abd 3 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=114ca3d018285f73df3aabc4e6379a81d38b09f198140d1db73e60ca25f29abd 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.VEq 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.VEq 00:17:09.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.VEq 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78449 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # '[' -z 78449 ']' 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@843 -- # local max_retries=100 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@847 -- # xtrace_disable 00:17:09.020 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@871 -- # return 0 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.60S 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.baR ]] 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.baR 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.hVi 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.v4f ]] 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.v4f 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.k4u 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.fuM ]] 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fuM 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:09.280 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.WiZ 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.IOx ]] 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.IOx 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.VEq 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.540 08:29:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:09.540 08:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:09.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:09.799 Waiting for block devices as requested 00:17:09.799 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:10.059 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:10.627 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:10.628 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:10.628 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:10.628 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1595 -- # local device=nvme0n1 00:17:10.628 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:10.628 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:17:10.628 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:10.628 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:10.628 08:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:10.628 No valid GPT data, bailing 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1595 -- # local device=nvme0n2 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:10.628 No valid GPT data, bailing 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1595 -- # local device=nvme0n3 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:10.628 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:10.628 No valid GPT data, bailing 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1595 -- # local device=nvme1n1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:10.886 No valid GPT data, bailing 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:10.886 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -a 10.0.0.1 -t tcp -s 4420 00:17:10.886 00:17:10.886 Discovery Log Number of Records 2, Generation counter 2 00:17:10.886 =====Discovery Log Entry 0====== 00:17:10.886 trtype: tcp 00:17:10.886 adrfam: ipv4 00:17:10.886 subtype: current discovery subsystem 00:17:10.886 treq: not specified, sq flow control disable supported 00:17:10.886 portid: 1 00:17:10.886 trsvcid: 4420 00:17:10.886 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:10.886 traddr: 10.0.0.1 00:17:10.886 eflags: none 00:17:10.886 sectype: none 00:17:10.886 =====Discovery Log Entry 1====== 00:17:10.886 trtype: tcp 00:17:10.886 adrfam: ipv4 00:17:10.887 subtype: nvme subsystem 00:17:10.887 treq: not specified, sq flow control disable supported 00:17:10.887 portid: 1 00:17:10.887 trsvcid: 4420 00:17:10.887 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:10.887 traddr: 10.0.0.1 00:17:10.887 eflags: none 00:17:10.887 sectype: none 00:17:10.887 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:10.887 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:10.887 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:10.887 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:10.887 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.887 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.887 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:10.887 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:10.887 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:10.887 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:10.887 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.887 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.146 nvme0n1 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.146 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.147 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.406 nvme0n1 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.406 
08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.406 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.407 08:29:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.407 nvme0n1 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.407 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.671 08:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:11.671 08:29:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.671 nvme0n1 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.671 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.671 08:29:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.672 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.672 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.931 nvme0n1 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.931 
08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:11.931 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
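The iterations traced above walk keyids 0 through 4 against the sha256 digest with the ffdhe2048 DH group; the loop then continues below with ffdhe3072 and ffdhe4096. A minimal sketch of one such iteration, condensed from the trace, is shown here. The nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumptions, since the trace records only the values being echoed and not their destinations, and rpc_cmd stands for the harness's wrapper around the SPDK RPC client; key1/ckey1 are the key names registered earlier in the test.

  # Target side: point the kernel nvmet host entry at the secret under test.
  # Attribute paths under /sys/kernel/config/nvmet/hosts/<hostnqn>/ are assumed.
  host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host_cfs/dhchap_hash"        # assumed attribute name
  echo ffdhe2048 > "$host_cfs/dhchap_dhgroup"          # assumed attribute name
  echo "DHHC-1:00:..." > "$host_cfs/dhchap_key"        # host secret for this keyid (value elided)
  echo "DHHC-1:02:..." > "$host_cfs/dhchap_ctrl_key"   # controller secret, when one is defined

  # Initiator side: restrict SPDK to the digest/dhgroup under test, attach with the
  # matching pre-registered key names, verify, then detach before the next keyid.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0
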
00:17:12.190 nvme0n1 00:17:12.190 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:12.190 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.190 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.191 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:12.450 08:29:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:12.450 08:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.709 nvme0n1 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.709 08:30:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.709 08:30:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:12.709 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.020 nvme0n1 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.020 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.021 nvme0n1 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.021 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.316 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.317 nvme0n1 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.317 nvme0n1 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.317 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.577 08:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.144 08:30:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.144 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.404 nvme0n1 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.404 08:30:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.404 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.664 nvme0n1 00:17:14.664 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.664 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.664 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.664 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.664 08:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.664 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.923 nvme0n1 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.923 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:14.924 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.183 nvme0n1 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.183 08:30:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:15.183 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.443 nvme0n1 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.443 08:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:17.348 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.607 nvme0n1 00:17:17.607 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:17.607 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.607 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.607 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:17.607 08:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:17.607 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.177 nvme0n1 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.177 08:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.177 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:18.178 08:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:18.178 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.437 nvme0n1 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.437 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.438 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.438 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.438 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.438 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.438 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.438 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.438 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:18.438 08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:18.438 
08:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.006 nvme0n1 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@566 -- # xtrace_disable 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:19.006 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.265 nvme0n1 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.265 08:30:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.265 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.266 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.266 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.266 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.266 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.266 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.266 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.266 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.266 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.266 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.266 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:19.266 08:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.250 nvme0n1 00:17:20.250 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:20.250 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.250 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.250 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:20.250 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:20.251 08:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.819 nvme0n1 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # 
xtrace_disable 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:20.819 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.820 
08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:20.820 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.388 nvme0n1 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:21.388 08:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.957 nvme0n1 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.957 08:30:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.957 08:30:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:21.957 08:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.895 nvme0n1 00:17:22.895 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:22.895 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.895 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.895 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:22.895 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.895 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:22.895 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.895 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.895 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:22.895 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.896 nvme0n1 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:22.896 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.156 nvme0n1 00:17:23.156 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:23.157 
08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.157 nvme0n1 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.157 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.417 
08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.417 nvme0n1 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.417 08:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.677 nvme0n1 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.677 nvme0n1 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.677 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.937 
08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.937 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.938 08:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.938 nvme0n1 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:23.938 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:24.198 08:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.198 nvme0n1 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.198 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.198 08:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.199 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.458 nvme0n1 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:24.458 
08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.458 08:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:17:24.717 nvme0n1 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:24.717 08:30:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.717 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.718 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.718 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.977 nvme0n1 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.977 08:30:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.977 08:30:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:24.977 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.236 nvme0n1 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.236 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.237 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.237 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.237 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.237 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.237 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.237 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.237 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:25.237 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.496 nvme0n1 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:25.496 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:25.497 08:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.756 nvme0n1 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:25.756 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.016 nvme0n1 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:26.016 08:30:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:26.016 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.585 nvme0n1 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.585 08:30:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:26.585 08:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.884 nvme0n1 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.884 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.885 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.885 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.885 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.885 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.885 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.885 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.885 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.885 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.885 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:26.885 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.452 nvme0n1 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:27.452 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:27.453 08:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.712 nvme0n1 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.712 08:30:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:27.712 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:27.971 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.231 nvme0n1 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:28.231 08:30:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.799 nvme0n1 00:17:28.799 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:28.799 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.799 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.799 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:28.799 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.799 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:28.799 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.799 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.799 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:28.799 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.058 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:29.059 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.626 nvme0n1 00:17:29.626 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:29.626 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.626 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:29.626 08:30:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.626 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.626 08:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.626 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:29.627 08:30:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:29.627 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.194 nvme0n1 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:30.194 08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:30.194 
08:30:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.131 nvme0n1 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.131 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.132 08:30:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.700 nvme0n1 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:31.700 08:30:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.700 08:30:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.700 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.701 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.701 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.701 nvme0n1 00:17:31.701 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.701 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.701 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.701 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.701 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:31.960 08:30:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.960 nvme0n1 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.960 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.219 nvme0n1 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.219 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.220 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.479 nvme0n1 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.479 nvme0n1 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.479 08:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.479 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:32.788 nvme0n1 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.788 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.789 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.789 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.789 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.789 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:32.789 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.048 nvme0n1 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:33.048 
08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.048 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.307 nvme0n1 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.307 
08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.307 nvme0n1 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.307 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.308 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.308 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.308 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.308 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.308 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.308 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.308 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.574 08:30:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.574 nvme0n1 00:17:33.574 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.575 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.850 nvme0n1 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.851 
08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.851 08:30:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:33.851 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.110 nvme0n1 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:34.110 08:30:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.110 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.367 nvme0n1 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.367 08:30:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.367 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.368 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.368 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.368 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:34.368 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.368 08:30:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.625 nvme0n1 00:17:34.625 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.625 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.625 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.625 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.625 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.625 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.625 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.625 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.625 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.625 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:34.883 
08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
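For reference, the trace above repeats one pattern for every digest/dhgroup/keyid combination: restrict the initiator to the DH-HMAC-CHAP parameters under test, attach the controller with the matching --dhchap-key (and, where the test defines one, --dhchap-ctrlr-key), confirm that a controller named nvme0 appears, then detach it before the next iteration. A minimal standalone sketch of one such iteration using scripts/rpc.py is shown below; the sha512/ffdhe4096 parameters, the key names key3/ckey3, and the target at 10.0.0.1:4420 are taken from the log, while the pre-registration of the DHHC-1 secrets in the SPDK keyring and on the kernel nvmet side is assumed to have happened earlier in the test and is not shown in this excerpt.

    # Hedged sketch of one iteration of the auth loop seen above (sha512 / ffdhe4096 / keyid=3).
    # Assumes the DHHC-1 secrets were already registered under the names "key3"/"ckey3" and
    # that the nvmet subsystem listening on 10.0.0.1:4420 carries the matching host key,
    # as the surrounding test sets up before this point.
    rpc=scripts/rpc.py

    # Limit the initiator to the digest/dhgroup pair being exercised.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Attach with bidirectional authentication (host key plus controller key).
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # The attach only succeeds if DH-HMAC-CHAP completed; verify, then clean up.
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    $rpc bdev_nvme_detach_controller nvme0

Note that for keyid=4 the trace leaves ckey empty, so that case attaches with --dhchap-key only, exercising unidirectional authentication; the verification and detach steps are identical.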
00:17:34.883 nvme0n1 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:35.142 08:30:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:35.142 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.400 nvme0n1 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.400 08:30:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.400 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.401 08:30:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:35.401 08:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.968 nvme0n1 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:35.968 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.228 nvme0n1 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.228 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:36.229 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.229 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:36.229 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.229 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:36.229 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:36.488 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:36.488 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.488 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.488 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:36.488 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.488 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:36.488 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:36.488 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:36.488 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:36.488 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:36.488 08:30:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.747 nvme0n1 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:36.747 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:36.748 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.316 nvme0n1 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdkOGVjYjE2ZmJlYzQyZGZiYWMyZTBlZTkzZjQyMmZsY2LV: 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: ]] 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2MyYzIwZjhhNWU0NWVlY2UzZDYxNGQwYjU1ZTA2MTBiNDA4YjljODBkYmJmYjI0YjUwZDQxNjllMmUzYjM1NSOHiE8=: 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:37.316 08:30:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:37.316 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:37.317 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.317 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:37.317 08:30:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.884 nvme0n1 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:37.884 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.885 08:30:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:37.885 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.450 nvme0n1 00:17:38.450 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:38.450 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.450 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.450 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:38.450 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.450 08:30:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:38.708 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.274 nvme0n1 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWMyMTUzOWQ3YTIyOGI0YTI4ZTc3ZGRmMWVhNmVlZDc5MzA0ZWI4MzgzYzBjYzZku9/Fjg==: 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: ]] 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjVlMDExODI2Y2IwZjU3MzVkZjdiODIwY2YyMmY3ZTND0+8q: 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:39.274 08:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.841 nvme0n1 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE0Y2EzZDAxODI4NWY3M2RmM2FhYmM0ZTYzNzlhODFkMzhiMDlmMTk4MTQwZDFkYjczZTYwY2EyNWYyOWFiZB+vpSA=: 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:39.841 08:30:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:39.841 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:40.100 08:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.666 nvme0n1 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:40.666 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # 
local es=0 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@658 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.667 request: 00:17:40.667 { 00:17:40.667 "name": "nvme0", 00:17:40.667 "trtype": "tcp", 00:17:40.667 "traddr": "10.0.0.1", 00:17:40.667 "adrfam": "ipv4", 00:17:40.667 "trsvcid": "4420", 00:17:40.667 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:40.667 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:40.667 "prchk_reftag": false, 00:17:40.667 "prchk_guard": false, 00:17:40.667 "hdgst": false, 00:17:40.667 "ddgst": false, 00:17:40.667 "allow_unrecognized_csi": false, 00:17:40.667 "method": "bdev_nvme_attach_controller", 00:17:40.667 "req_id": 1 00:17:40.667 } 00:17:40.667 Got JSON-RPC error response 00:17:40.667 response: 00:17:40.667 { 00:17:40.667 "code": -5, 00:17:40.667 "message": "Input/output error" 00:17:40.667 } 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@658 -- # es=1 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # local es=0 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@658 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:40.667 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.926 request: 00:17:40.926 { 00:17:40.926 "name": "nvme0", 00:17:40.926 "trtype": "tcp", 00:17:40.926 "traddr": "10.0.0.1", 00:17:40.926 "adrfam": "ipv4", 00:17:40.926 "trsvcid": "4420", 00:17:40.926 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:40.926 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:40.926 "prchk_reftag": false, 00:17:40.926 "prchk_guard": false, 00:17:40.926 "hdgst": false, 00:17:40.926 "ddgst": false, 00:17:40.926 "dhchap_key": "key2", 00:17:40.926 "allow_unrecognized_csi": false, 00:17:40.926 "method": "bdev_nvme_attach_controller", 00:17:40.926 "req_id": 1 00:17:40.926 } 00:17:40.926 Got JSON-RPC error response 00:17:40.926 response: 00:17:40.926 { 00:17:40.926 "code": -5, 00:17:40.926 "message": "Input/output error" 00:17:40.926 } 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@658 -- # es=1 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:17:40.926 08:30:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # local es=0 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@658 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.926 request: 00:17:40.926 { 00:17:40.926 "name": "nvme0", 00:17:40.926 "trtype": "tcp", 00:17:40.926 "traddr": "10.0.0.1", 00:17:40.926 "adrfam": "ipv4", 00:17:40.926 "trsvcid": "4420", 
00:17:40.926 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:40.926 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:40.926 "prchk_reftag": false, 00:17:40.926 "prchk_guard": false, 00:17:40.926 "hdgst": false, 00:17:40.926 "ddgst": false, 00:17:40.926 "dhchap_key": "key1", 00:17:40.926 "dhchap_ctrlr_key": "ckey2", 00:17:40.926 "allow_unrecognized_csi": false, 00:17:40.926 "method": "bdev_nvme_attach_controller", 00:17:40.926 "req_id": 1 00:17:40.926 } 00:17:40.926 Got JSON-RPC error response 00:17:40.926 response: 00:17:40.926 { 00:17:40.926 "code": -5, 00:17:40.926 "message": "Input/output error" 00:17:40.926 } 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@658 -- # es=1 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.926 nvme0n1 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:40.926 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:40.927 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.927 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:40.927 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.927 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:40.927 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.927 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:40.927 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.927 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # local es=0 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@658 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.186 request: 00:17:41.186 { 00:17:41.186 "name": "nvme0", 00:17:41.186 "dhchap_key": "key1", 00:17:41.186 "dhchap_ctrlr_key": "ckey2", 00:17:41.186 "method": "bdev_nvme_set_keys", 00:17:41.186 "req_id": 1 00:17:41.186 } 00:17:41.186 Got JSON-RPC error response 00:17:41.186 response: 00:17:41.186 
{ 00:17:41.186 "code": -5, 00:17:41.186 "message": "Input/output error" 00:17:41.186 } 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@658 -- # es=1 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:41.186 08:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:42.129 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA3MTQxNTQ0NmEzNjdhOGQ1NTBlYzliMjMzODZkM2UzOGVmNDQ1NDkxMzVhZGRkiXy7yQ==: 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: ]] 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTAxMWU4MWI1NTMwMTZiNWMyNDU2OGNmMWU0NTNkZDJhMDM4M2ExMTU4NDRhZjUwxmFPKw==: 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:42.130 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.388 nvme0n1 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmY5MTE3NTk3N2JlZDgwNDkxYjFiNDVlODE3ODJmNGVcxuU6: 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: ]] 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2IxOWNmYzRmZTBhZDNmMDE4OTdmMGQxMTNmY2KH6soI: 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # local es=0 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@657 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@658 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.388 request: 00:17:42.388 { 00:17:42.388 "name": "nvme0", 00:17:42.388 "dhchap_key": "key2", 00:17:42.388 "dhchap_ctrlr_key": "ckey1", 00:17:42.388 "method": "bdev_nvme_set_keys", 00:17:42.388 "req_id": 1 00:17:42.388 } 00:17:42.388 Got JSON-RPC error response 00:17:42.388 response: 00:17:42.388 { 00:17:42.388 "code": -13, 00:17:42.388 "message": "Permission denied" 00:17:42.388 } 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:17:42.388 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@658 -- # es=1 00:17:42.389 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:17:42.389 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:17:42.389 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:17:42.389 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:42.389 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.389 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:42.389 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.389 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:42.389 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:42.389 08:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:43.324 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.324 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:43.324 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:43.324 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.324 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:43.582 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:43.582 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:43.582 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:43.582 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:43.582 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:43.582 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:43.582 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:43.583 rmmod nvme_tcp 00:17:43.583 rmmod nvme_fabrics 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78449 ']' 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78449 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' -z 78449 ']' 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@961 -- # kill -0 78449 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # uname 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:17:43.583 08:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 78449 00:17:43.583 killing process with pid 78449 00:17:43.583 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:17:43.583 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:17:43.583 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@975 -- # echo 'killing process with pid 78449' 00:17:43.583 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # kill 78449 00:17:43.583 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@981 -- # wait 78449 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:43.841 08:30:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:43.841 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.099 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.099 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:44.099 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.099 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:44.100 08:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:44.669 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:44.928 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:17:44.928 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:44.928 08:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.60S /tmp/spdk.key-null.hVi /tmp/spdk.key-sha256.k4u /tmp/spdk.key-sha384.WiZ /tmp/spdk.key-sha512.VEq /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:44.928 08:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:45.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:45.496 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:45.496 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:45.496 00:17:45.496 real 0m38.757s 00:17:45.496 user 0m35.247s 00:17:45.496 sys 0m3.927s 00:17:45.496 08:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1133 -- # xtrace_disable 00:17:45.496 ************************************ 00:17:45.496 END TEST nvmf_auth_host 00:17:45.496 ************************************ 00:17:45.496 08:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 08:30:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:45.496 08:30:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:45.496 08:30:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:17:45.496 08:30:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1114 -- # xtrace_disable 00:17:45.496 08:30:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 ************************************ 00:17:45.496 START TEST nvmf_digest 00:17:45.496 ************************************ 00:17:45.496 08:30:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:45.496 * Looking for test storage... 
00:17:45.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:45.496 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:17:45.496 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1638 -- # lcov --version 00:17:45.496 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:17:45.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.757 --rc genhtml_branch_coverage=1 00:17:45.757 --rc genhtml_function_coverage=1 00:17:45.757 --rc genhtml_legend=1 00:17:45.757 --rc geninfo_all_blocks=1 00:17:45.757 --rc geninfo_unexecuted_blocks=1 00:17:45.757 00:17:45.757 ' 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:17:45.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.757 --rc genhtml_branch_coverage=1 00:17:45.757 --rc genhtml_function_coverage=1 00:17:45.757 --rc genhtml_legend=1 00:17:45.757 --rc geninfo_all_blocks=1 00:17:45.757 --rc geninfo_unexecuted_blocks=1 00:17:45.757 00:17:45.757 ' 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:17:45.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.757 --rc genhtml_branch_coverage=1 00:17:45.757 --rc genhtml_function_coverage=1 00:17:45.757 --rc genhtml_legend=1 00:17:45.757 --rc geninfo_all_blocks=1 00:17:45.757 --rc geninfo_unexecuted_blocks=1 00:17:45.757 00:17:45.757 ' 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:17:45.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.757 --rc genhtml_branch_coverage=1 00:17:45.757 --rc genhtml_function_coverage=1 00:17:45.757 --rc genhtml_legend=1 00:17:45.757 --rc geninfo_all_blocks=1 00:17:45.757 --rc geninfo_unexecuted_blocks=1 00:17:45.757 00:17:45.757 ' 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.757 08:30:33 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.757 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.758 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:45.758 Cannot find device "nvmf_init_br" 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:45.758 Cannot find device "nvmf_init_br2" 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:45.758 Cannot find device "nvmf_tgt_br" 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:45.758 Cannot find device "nvmf_tgt_br2" 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:45.758 Cannot find device "nvmf_init_br" 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:45.758 Cannot find device "nvmf_init_br2" 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:45.758 Cannot find device "nvmf_tgt_br" 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:45.758 Cannot find device "nvmf_tgt_br2" 00:17:45.758 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:45.759 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:45.759 Cannot find device "nvmf_br" 00:17:45.759 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:45.759 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:45.759 Cannot find device "nvmf_init_if" 00:17:45.759 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:45.759 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:45.759 Cannot find device "nvmf_init_if2" 00:17:45.759 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:45.759 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:45.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:45.759 
08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:45.759 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:45.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:45.759 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:45.759 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:45.759 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 
-- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:46.028 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:46.029 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:46.029 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:17:46.029 00:17:46.029 --- 10.0.0.3 ping statistics --- 00:17:46.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.029 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:46.029 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:46.029 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:17:46.029 00:17:46.029 --- 10.0.0.4 ping statistics --- 00:17:46.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.029 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:46.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:46.029 00:17:46.029 --- 10.0.0.1 ping statistics --- 00:17:46.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.029 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:46.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:46.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:46.029 00:17:46.029 --- 10.0.0.2 ping statistics --- 00:17:46.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.029 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:46.029 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1114 -- # xtrace_disable 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:46.296 ************************************ 00:17:46.296 START TEST nvmf_digest_clean 00:17:46.296 ************************************ 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1132 -- # run_digest 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:46.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
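The commands traced above (nvmf/common.sh@177 through @225) rebuild the test network from scratch: a dedicated namespace for the target, two veth pairs for the initiator side and two for the target side, a bridge joining the host-side peer ends, ACCEPT rules for the NVMe/TCP port, and a ping in each direction before any NVMe traffic flows. A condensed sketch of the same steps, with interface names and addresses taken from the trace (the iptables comment string is shortened here):

  # Namespace and veth pairs: *_if ends carry traffic, *_br ends get enslaved to the bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # Target-side ends move into the namespace.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Initiator addresses on the host, target addresses inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # Bring everything up, host side and namespace side.
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  # Bridge the four host-side peer ends together.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  # Allow NVMe/TCP (port 4420) in and bridge-internal forwarding.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Connectivity check in both directions, as in the trace.
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2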
00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80115 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80115 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # '[' -z 80115 ']' 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@843 -- # local max_retries=100 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@847 -- # xtrace_disable 00:17:46.296 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:46.296 [2024-11-20 08:30:33.685764] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:17:46.296 [2024-11-20 08:30:33.685918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.572 [2024-11-20 08:30:33.851222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.572 [2024-11-20 08:30:33.920350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.572 [2024-11-20 08:30:33.920591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.572 [2024-11-20 08:30:33.920674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.572 [2024-11-20 08:30:33.920739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.572 [2024-11-20 08:30:33.920804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
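nvmfappstart, traced above, launches the target inside the namespace with --wait-for-rpc and then blocks (waitforlisten) until the RPC socket answers before any configuration is sent. A minimal stand-alone equivalent is sketched below; the readiness poll via rpc_get_methods and the 60-second cap are assumptions about how to detect readiness, not lifted from the trace:

  # Start nvmf_tgt in the target namespace, paused until RPC-driven configuration arrives.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Poll the default RPC socket (/var/tmp/spdk.sock) until the app responds (assumed 60 s cap).
  for _ in $(seq 1 60); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
      sleep 1
  done

Once the socket answers, common_target_config (the rpc_cmd block traced next) creates the null0 bdev and brings up the NVMe/TCP listener on 10.0.0.3 port 4420 that every digest run connects to.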
00:17:46.572 [2024-11-20 08:30:33.921359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.572 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:17:46.572 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@871 -- # return 0 00:17:46.572 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:46.572 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@735 -- # xtrace_disable 00:17:46.572 08:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:46.572 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.572 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:46.572 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:46.572 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:46.572 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@566 -- # xtrace_disable 00:17:46.572 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:46.572 [2024-11-20 08:30:34.070884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:46.572 null0 00:17:46.831 [2024-11-20 08:30:34.132136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.831 [2024-11-20 08:30:34.156283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80141 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80141 /var/tmp/bperf.sock 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # '[' -z 80141 ']' 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@843 -- # local max_retries=100 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:46.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@847 -- # xtrace_disable 00:17:46.831 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:46.831 [2024-11-20 08:30:34.231478] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:17:46.831 [2024-11-20 08:30:34.231825] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80141 ] 00:17:46.831 [2024-11-20 08:30:34.387204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.090 [2024-11-20 08:30:34.452789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.090 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:17:47.090 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@871 -- # return 0 00:17:47.090 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:47.090 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:47.090 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:47.350 [2024-11-20 08:30:34.832395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:47.350 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:47.350 08:30:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:47.917 nvme0n1 00:17:47.917 08:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:47.917 08:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:47.917 Running I/O for 2 seconds... 
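Each run_bperf iteration repeats the same host-side recipe: start bdevperf paused, finish framework initialization over its private RPC socket, attach an NVMe-oF/TCP controller with data digest enabled, and only then kick off the timed workload. A sketch of that sequence for this first run (randread, 4 KiB, queue depth 128), with paths and arguments taken from the trace; -z keeps bdevperf idle until perform_tests is issued:

  # 1. bdevperf pinned to core 1 (-m 2), paused (-z --wait-for-rpc), private RPC socket.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # 2. Finish framework initialization once the socket is up.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # 3. Attach the target subsystem over TCP with data digest (CRC-32C on data PDUs) enabled;
  #    this creates the bdev nvme0n1 used by the workload.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4. Run the configured 2-second workload.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests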
00:17:49.827 14732.00 IOPS, 57.55 MiB/s [2024-11-20T08:30:37.388Z] 14859.00 IOPS, 58.04 MiB/s 00:17:49.827 Latency(us) 00:17:49.827 [2024-11-20T08:30:37.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.827 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:49.827 nvme0n1 : 2.01 14852.84 58.02 0.00 0.00 8610.56 7477.06 17992.61 00:17:49.827 [2024-11-20T08:30:37.388Z] =================================================================================================================== 00:17:49.827 [2024-11-20T08:30:37.388Z] Total : 14852.84 58.02 0.00 0.00 8610.56 7477.06 17992.61 00:17:49.827 { 00:17:49.827 "results": [ 00:17:49.827 { 00:17:49.827 "job": "nvme0n1", 00:17:49.827 "core_mask": "0x2", 00:17:49.827 "workload": "randread", 00:17:49.827 "status": "finished", 00:17:49.827 "queue_depth": 128, 00:17:49.827 "io_size": 4096, 00:17:49.827 "runtime": 2.009448, 00:17:49.827 "iops": 14852.835206484568, 00:17:49.827 "mibps": 58.01888752533034, 00:17:49.827 "io_failed": 0, 00:17:49.827 "io_timeout": 0, 00:17:49.827 "avg_latency_us": 8610.555391616357, 00:17:49.827 "min_latency_us": 7477.061818181818, 00:17:49.827 "max_latency_us": 17992.61090909091 00:17:49.827 } 00:17:49.827 ], 00:17:49.827 "core_count": 1 00:17:49.827 } 00:17:49.827 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:49.827 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:49.827 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:49.827 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:49.827 | select(.opcode=="crc32c") 00:17:49.827 | "\(.module_name) \(.executed)"' 00:17:49.827 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80141 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' -z 80141 ']' 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill -0 80141 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # uname 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80141 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 
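After each run the test reads the crc32c entry out of accel_get_stats to confirm which accel module actually computed the digests; with DSA disabled (scan_dsa=false) the expected module is software, and the executed count must be non-zero. The check above, spelled out as a single command:

  # Which accel module executed crc32c, and how many times? Expect "software <n>" with n > 0.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

The throughput column in the summary is just IOPS times the I/O size: 14852.84 IOPS x 4096 B / 1048576 ≈ 58.02 MiB/s, matching the table above.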
00:17:50.458 killing process with pid 80141 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80141' 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # kill 80141 00:17:50.458 Received shutdown signal, test time was about 2.000000 seconds 00:17:50.458 00:17:50.458 Latency(us) 00:17:50.458 [2024-11-20T08:30:38.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.458 [2024-11-20T08:30:38.019Z] =================================================================================================================== 00:17:50.458 [2024-11-20T08:30:38.019Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@981 -- # wait 80141 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80188 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80188 /var/tmp/bperf.sock 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # '[' -z 80188 ']' 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@843 -- # local max_retries=100 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:50.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@847 -- # xtrace_disable 00:17:50.458 08:30:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:50.458 [2024-11-20 08:30:37.996477] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:17:50.458 [2024-11-20 08:30:37.996772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:17:50.458 Zero copy mechanism will not be used. 00:17:50.458 llocations --file-prefix=spdk_pid80188 ] 00:17:50.717 [2024-11-20 08:30:38.145485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.717 [2024-11-20 08:30:38.191406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.717 08:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:17:50.717 08:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@871 -- # return 0 00:17:50.717 08:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:50.717 08:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:50.718 08:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:51.286 [2024-11-20 08:30:38.577777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:51.286 08:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:51.286 08:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:51.546 nvme0n1 00:17:51.546 08:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:51.546 08:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:51.546 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:51.546 Zero copy mechanism will not be used. 00:17:51.546 Running I/O for 2 seconds... 
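The two stderr lines woven through the EAL banner above ("I/O size of 131072 is greater than zero copy threshold (65536)", "Zero copy mechanism will not be used") are emitted while the parameters line is still being printed, which is why "--match-allocations" appears split around them. The content itself is a simple threshold check: 131072 B per I/O is above the 65536 B zero-copy threshold, so the zero-copy send path is skipped for these 128 KiB runs, whereas the 4 KiB runs (4096 B < 65536 B) never print the warning.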
00:17:53.876 7712.00 IOPS, 964.00 MiB/s [2024-11-20T08:30:41.437Z] 7760.00 IOPS, 970.00 MiB/s 00:17:53.876 Latency(us) 00:17:53.876 [2024-11-20T08:30:41.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.876 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:53.876 nvme0n1 : 2.00 7756.64 969.58 0.00 0.00 2059.32 1750.11 7536.64 00:17:53.876 [2024-11-20T08:30:41.437Z] =================================================================================================================== 00:17:53.876 [2024-11-20T08:30:41.437Z] Total : 7756.64 969.58 0.00 0.00 2059.32 1750.11 7536.64 00:17:53.876 { 00:17:53.876 "results": [ 00:17:53.876 { 00:17:53.876 "job": "nvme0n1", 00:17:53.876 "core_mask": "0x2", 00:17:53.876 "workload": "randread", 00:17:53.876 "status": "finished", 00:17:53.876 "queue_depth": 16, 00:17:53.876 "io_size": 131072, 00:17:53.876 "runtime": 2.002929, 00:17:53.876 "iops": 7756.640400134003, 00:17:53.876 "mibps": 969.5800500167504, 00:17:53.876 "io_failed": 0, 00:17:53.876 "io_timeout": 0, 00:17:53.876 "avg_latency_us": 2059.3204681209622, 00:17:53.876 "min_latency_us": 1750.1090909090908, 00:17:53.876 "max_latency_us": 7536.64 00:17:53.876 } 00:17:53.876 ], 00:17:53.876 "core_count": 1 00:17:53.876 } 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:53.876 | select(.opcode=="crc32c") 00:17:53.876 | "\(.module_name) \(.executed)"' 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80188 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' -z 80188 ']' 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill -0 80188 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # uname 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:17:53.876 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80188 00:17:54.136 killing process with pid 80188 00:17:54.136 Received shutdown signal, test time was about 2.000000 seconds 00:17:54.136 00:17:54.136 Latency(us) 00:17:54.136 [2024-11-20T08:30:41.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.136 
[2024-11-20T08:30:41.697Z] =================================================================================================================== 00:17:54.136 [2024-11-20T08:30:41.697Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80188' 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # kill 80188 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@981 -- # wait 80188 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80241 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80241 /var/tmp/bperf.sock 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # '[' -z 80241 ']' 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@843 -- # local max_retries=100 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:54.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@847 -- # xtrace_disable 00:17:54.136 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:54.396 [2024-11-20 08:30:41.697987] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
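killprocess, used after every bdevperf run above and again for the target at the end of the test, follows the pattern visible in the trace: confirm the PID is alive, check the process name (refusing to signal a sudo wrapper), then SIGTERM and reap it. A sketch reconstructed from the traced checks; the non-Linux branch of the real helper is omitted and the structure is an approximation, not the verbatim function:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                        # still running?
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 for bdevperf, reactor_0 for nvmf_tgt
      [ "$process_name" = sudo ] && return 1            # never SIGTERM a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }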
00:17:54.396 [2024-11-20 08:30:41.698091] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80241 ] 00:17:54.396 [2024-11-20 08:30:41.845560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.396 [2024-11-20 08:30:41.903789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.396 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:17:54.396 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@871 -- # return 0 00:17:54.396 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:54.396 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:54.396 08:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:54.964 [2024-11-20 08:30:42.229128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:54.964 08:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:54.964 08:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:55.222 nvme0n1 00:17:55.222 08:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:55.222 08:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:55.222 Running I/O for 2 seconds... 
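bdevperf.py prints the per-job results as JSON after each run (the blocks shown above for the earlier runs, and again after this one); the fields used in the summary tables come straight from that object. If the blob is captured to a file, hypothetically results.json, the same fields can be pulled back out with jq:

  # Assumes the JSON printed by perform_tests was redirected to results.json (not done in this trace).
  jq -r '.results[] | "\(.job) qd=\(.queue_depth) io=\(.io_size) iops=\(.iops) MiB/s=\(.mibps) avg_lat_us=\(.avg_latency_us)"' results.json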
00:17:57.535 16638.00 IOPS, 64.99 MiB/s [2024-11-20T08:30:45.096Z] 16764.50 IOPS, 65.49 MiB/s 00:17:57.535 Latency(us) 00:17:57.535 [2024-11-20T08:30:45.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.535 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.535 nvme0n1 : 2.00 16794.54 65.60 0.00 0.00 7614.97 6791.91 15609.48 00:17:57.535 [2024-11-20T08:30:45.096Z] =================================================================================================================== 00:17:57.535 [2024-11-20T08:30:45.096Z] Total : 16794.54 65.60 0.00 0.00 7614.97 6791.91 15609.48 00:17:57.535 { 00:17:57.535 "results": [ 00:17:57.535 { 00:17:57.535 "job": "nvme0n1", 00:17:57.535 "core_mask": "0x2", 00:17:57.535 "workload": "randwrite", 00:17:57.535 "status": "finished", 00:17:57.535 "queue_depth": 128, 00:17:57.535 "io_size": 4096, 00:17:57.535 "runtime": 2.004044, 00:17:57.535 "iops": 16794.541437213953, 00:17:57.535 "mibps": 65.603677489117, 00:17:57.535 "io_failed": 0, 00:17:57.535 "io_timeout": 0, 00:17:57.535 "avg_latency_us": 7614.969014793626, 00:17:57.535 "min_latency_us": 6791.912727272727, 00:17:57.535 "max_latency_us": 15609.483636363637 00:17:57.535 } 00:17:57.535 ], 00:17:57.535 "core_count": 1 00:17:57.535 } 00:17:57.535 08:30:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:57.535 08:30:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:57.535 08:30:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:57.535 08:30:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:57.535 | select(.opcode=="crc32c") 00:17:57.535 | "\(.module_name) \(.executed)"' 00:17:57.536 08:30:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80241 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' -z 80241 ']' 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill -0 80241 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # uname 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80241 00:17:57.794 killing process with pid 80241 00:17:57.794 Received shutdown signal, test time was about 2.000000 seconds 00:17:57.794 00:17:57.794 Latency(us) 00:17:57.794 [2024-11-20T08:30:45.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:57.794 [2024-11-20T08:30:45.355Z] =================================================================================================================== 00:17:57.794 [2024-11-20T08:30:45.355Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80241' 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # kill 80241 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@981 -- # wait 80241 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80295 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80295 /var/tmp/bperf.sock 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:57.794 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # '[' -z 80295 ']' 00:17:57.795 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:57.795 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@843 -- # local max_retries=100 00:17:57.795 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:57.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:57.795 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@847 -- # xtrace_disable 00:17:57.795 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:58.053 [2024-11-20 08:30:45.373447] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:17:58.053 [2024-11-20 08:30:45.373678] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:17:58.053 Zero copy mechanism will not be used. 00:17:58.053 llocations --file-prefix=spdk_pid80295 ] 00:17:58.053 [2024-11-20 08:30:45.516798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.053 [2024-11-20 08:30:45.570706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.312 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:17:58.312 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@871 -- # return 0 00:17:58.312 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:58.312 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:58.312 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:58.571 [2024-11-20 08:30:45.928210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:58.571 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.571 08:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.829 nvme0n1 00:17:58.829 08:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:58.829 08:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:59.088 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:59.088 Zero copy mechanism will not be used. 00:17:59.088 Running I/O for 2 seconds... 
00:18:00.962 6528.00 IOPS, 816.00 MiB/s [2024-11-20T08:30:48.523Z] 6492.50 IOPS, 811.56 MiB/s 00:18:00.962 Latency(us) 00:18:00.962 [2024-11-20T08:30:48.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.962 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:00.962 nvme0n1 : 2.00 6489.22 811.15 0.00 0.00 2459.87 2085.24 6702.55 00:18:00.962 [2024-11-20T08:30:48.523Z] =================================================================================================================== 00:18:00.962 [2024-11-20T08:30:48.523Z] Total : 6489.22 811.15 0.00 0.00 2459.87 2085.24 6702.55 00:18:00.962 { 00:18:00.962 "results": [ 00:18:00.962 { 00:18:00.962 "job": "nvme0n1", 00:18:00.962 "core_mask": "0x2", 00:18:00.962 "workload": "randwrite", 00:18:00.962 "status": "finished", 00:18:00.962 "queue_depth": 16, 00:18:00.962 "io_size": 131072, 00:18:00.962 "runtime": 2.003168, 00:18:00.962 "iops": 6489.221073819071, 00:18:00.962 "mibps": 811.1526342273838, 00:18:00.962 "io_failed": 0, 00:18:00.962 "io_timeout": 0, 00:18:00.962 "avg_latency_us": 2459.873311653344, 00:18:00.962 "min_latency_us": 2085.2363636363634, 00:18:00.962 "max_latency_us": 6702.545454545455 00:18:00.962 } 00:18:00.962 ], 00:18:00.962 "core_count": 1 00:18:00.962 } 00:18:00.962 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:00.962 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:00.962 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:00.962 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:00.962 | select(.opcode=="crc32c") 00:18:00.962 | "\(.module_name) \(.executed)"' 00:18:00.962 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:01.221 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:01.221 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:01.221 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:01.221 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:01.221 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80295 00:18:01.221 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' -z 80295 ']' 00:18:01.221 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill -0 80295 00:18:01.221 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # uname 00:18:01.221 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:18:01.221 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80295 00:18:01.221 killing process with pid 80295 00:18:01.221 Received shutdown signal, test time was about 2.000000 seconds 00:18:01.221 00:18:01.221 Latency(us) 00:18:01.221 [2024-11-20T08:30:48.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:01.221 [2024-11-20T08:30:48.783Z] =================================================================================================================== 00:18:01.222 [2024-11-20T08:30:48.783Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.222 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:18:01.222 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:18:01.222 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80295' 00:18:01.222 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # kill 80295 00:18:01.222 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@981 -- # wait 80295 00:18:01.480 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80115 00:18:01.480 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' -z 80115 ']' 00:18:01.480 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill -0 80115 00:18:01.480 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # uname 00:18:01.480 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:18:01.480 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80115 00:18:01.480 killing process with pid 80115 00:18:01.480 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:18:01.480 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:18:01.480 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80115' 00:18:01.480 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # kill 80115 00:18:01.480 08:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@981 -- # wait 80115 00:18:01.739 00:18:01.739 real 0m15.525s 00:18:01.739 user 0m30.270s 00:18:01.739 sys 0m4.495s 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1133 -- # xtrace_disable 00:18:01.739 ************************************ 00:18:01.739 END TEST nvmf_digest_clean 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:01.739 ************************************ 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1114 -- # xtrace_disable 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:01.739 ************************************ 00:18:01.739 START TEST nvmf_digest_error 00:18:01.739 ************************************ 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1132 -- # run_digest_error 00:18:01.739 08:30:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80369 00:18:01.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80369 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # '[' -z 80369 ']' 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@843 -- # local max_retries=100 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@847 -- # xtrace_disable 00:18:01.739 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:01.739 [2024-11-20 08:30:49.266499] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:18:01.739 [2024-11-20 08:30:49.266592] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.999 [2024-11-20 08:30:49.415370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.999 [2024-11-20 08:30:49.476444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.999 [2024-11-20 08:30:49.476760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.999 [2024-11-20 08:30:49.476781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.999 [2024-11-20 08:30:49.476790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.999 [2024-11-20 08:30:49.476797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
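run_digest_error drives the same bperf flow as the clean test, but first rewires the accel framework so every crc32c operation on the target goes through the error module, which is later told to corrupt digests; the host side is configured to keep retrying (--bdev-retry-count -1) and to count errors, so the injected failures surface as retried completions rather than aborting the run. The RPCs that appear in the trace below, collected into one sketch; the harness issues them through its rpc_cmd/bperf_rpc wrappers, shown here as plain rpc.py calls for clarity:

  # Target side: route crc32c through the error-injection accel module.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
  # Host side: unlimited retries plus per-error statistics on the bdevperf controller.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Injection stays disabled while the controller attaches, then is armed to corrupt 256 operations.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256

The read completions later in the trace ("data digest error on tqpair", "COMMAND TRANSIENT TRANSPORT ERROR (00/22)") are exactly these injected CRC-32C digest failures being detected by the host and retried.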
00:18:01.999 [2024-11-20 08:30:49.477214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@871 -- # return 0 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@735 -- # xtrace_disable 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:01.999 [2024-11-20 08:30:49.545653] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:01.999 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:02.259 [2024-11-20 08:30:49.609629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.259 null0 00:18:02.259 [2024-11-20 08:30:49.661426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.259 [2024-11-20 08:30:49.685613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80395 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80395 /var/tmp/bperf.sock 00:18:02.259 08:30:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # '[' -z 80395 ']' 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@843 -- # local max_retries=100 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:02.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@847 -- # xtrace_disable 00:18:02.259 08:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:02.259 [2024-11-20 08:30:49.750897] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:18:02.259 [2024-11-20 08:30:49.751218] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80395 ] 00:18:02.518 [2024-11-20 08:30:49.900435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.518 [2024-11-20 08:30:49.954202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.518 [2024-11-20 08:30:50.010119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.518 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:18:02.518 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@871 -- # return 0 00:18:02.518 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:02.518 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:03.099 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:03.100 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:03.100 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.100 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:18:03.100 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.100 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.366 nvme0n1 00:18:03.366 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:03.366 08:30:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:03.366 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.366 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:18:03.366 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:03.366 08:30:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:03.366 Running I/O for 2 seconds... 00:18:03.366 [2024-11-20 08:30:50.906633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.366 [2024-11-20 08:30:50.906687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.366 [2024-11-20 08:30:50.906702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.366 [2024-11-20 08:30:50.923143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.366 [2024-11-20 08:30:50.923180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.366 [2024-11-20 08:30:50.923210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.625 [2024-11-20 08:30:50.940805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.625 [2024-11-20 08:30:50.941122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.625 [2024-11-20 08:30:50.941157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.625 [2024-11-20 08:30:50.958056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.625 [2024-11-20 08:30:50.958093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.625 [2024-11-20 08:30:50.958122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.625 [2024-11-20 08:30:50.974566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.625 [2024-11-20 08:30:50.974605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.625 [2024-11-20 08:30:50.974635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.625 [2024-11-20 08:30:50.991765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.625 [2024-11-20 08:30:50.991819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1478 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.625 [2024-11-20 08:30:50.991835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.625 [2024-11-20 08:30:51.007651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.625 [2024-11-20 08:30:51.007689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.625 [2024-11-20 08:30:51.007718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.625 [2024-11-20 08:30:51.024087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.625 [2024-11-20 08:30:51.024123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.626 [2024-11-20 08:30:51.024136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.626 [2024-11-20 08:30:51.039954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.626 [2024-11-20 08:30:51.039990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.626 [2024-11-20 08:30:51.040003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.626 [2024-11-20 08:30:51.056938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.626 [2024-11-20 08:30:51.056973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.626 [2024-11-20 08:30:51.057018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.626 [2024-11-20 08:30:51.072638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.626 [2024-11-20 08:30:51.072875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.626 [2024-11-20 08:30:51.072909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.626 [2024-11-20 08:30:51.089111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.626 [2024-11-20 08:30:51.089149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.626 [2024-11-20 08:30:51.089195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.626 [2024-11-20 08:30:51.106790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.626 [2024-11-20 08:30:51.106840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:25 nsid:1 lba:9581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.626 [2024-11-20 08:30:51.106856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.626 [2024-11-20 08:30:51.124449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.626 [2024-11-20 08:30:51.124635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.626 [2024-11-20 08:30:51.124653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.626 [2024-11-20 08:30:51.142108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.626 [2024-11-20 08:30:51.142144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.626 [2024-11-20 08:30:51.142173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.626 [2024-11-20 08:30:51.159351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.626 [2024-11-20 08:30:51.159389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.626 [2024-11-20 08:30:51.159402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.626 [2024-11-20 08:30:51.176732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.626 [2024-11-20 08:30:51.176964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.626 [2024-11-20 08:30:51.176997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.885 [2024-11-20 08:30:51.194372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.885 [2024-11-20 08:30:51.194409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.885 [2024-11-20 08:30:51.194439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.885 [2024-11-20 08:30:51.211599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.885 [2024-11-20 08:30:51.211639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.885 [2024-11-20 08:30:51.211654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.885 [2024-11-20 08:30:51.228096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.885 [2024-11-20 08:30:51.228288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.885 [2024-11-20 08:30:51.228322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.885 [2024-11-20 08:30:51.244175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.885 [2024-11-20 08:30:51.244215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.885 [2024-11-20 08:30:51.244229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.885 [2024-11-20 08:30:51.260780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.885 [2024-11-20 08:30:51.260862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.885 [2024-11-20 08:30:51.260876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.885 [2024-11-20 08:30:51.278472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.885 [2024-11-20 08:30:51.278510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.885 [2024-11-20 08:30:51.278539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.885 [2024-11-20 08:30:51.296488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.885 [2024-11-20 08:30:51.296667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.885 [2024-11-20 08:30:51.296684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.885 [2024-11-20 08:30:51.313373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.885 [2024-11-20 08:30:51.313410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.885 [2024-11-20 08:30:51.313439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.886 [2024-11-20 08:30:51.330419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.886 [2024-11-20 08:30:51.330594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.886 [2024-11-20 08:30:51.330627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.886 [2024-11-20 08:30:51.346275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.886 
[2024-11-20 08:30:51.346338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.886 [2024-11-20 08:30:51.346351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.886 [2024-11-20 08:30:51.361784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.886 [2024-11-20 08:30:51.361864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.886 [2024-11-20 08:30:51.361893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.886 [2024-11-20 08:30:51.377930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.886 [2024-11-20 08:30:51.377965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.886 [2024-11-20 08:30:51.377995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.886 [2024-11-20 08:30:51.393431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.886 [2024-11-20 08:30:51.393466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.886 [2024-11-20 08:30:51.393495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.886 [2024-11-20 08:30:51.408986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.886 [2024-11-20 08:30:51.409021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.886 [2024-11-20 08:30:51.409050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.886 [2024-11-20 08:30:51.424478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.886 [2024-11-20 08:30:51.424645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.886 [2024-11-20 08:30:51.424664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.886 [2024-11-20 08:30:51.441530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:03.886 [2024-11-20 08:30:51.441739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.886 [2024-11-20 08:30:51.441757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.145 [2024-11-20 08:30:51.457503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf5c2c0) 00:18:04.145 [2024-11-20 08:30:51.457541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.145 [2024-11-20 08:30:51.457554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.145 [2024-11-20 08:30:51.472676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.145 [2024-11-20 08:30:51.472886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.145 [2024-11-20 08:30:51.472919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.145 [2024-11-20 08:30:51.488447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.145 [2024-11-20 08:30:51.488488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.145 [2024-11-20 08:30:51.488502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.145 [2024-11-20 08:30:51.505095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.145 [2024-11-20 08:30:51.505270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.145 [2024-11-20 08:30:51.505303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.146 [2024-11-20 08:30:51.520599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.146 [2024-11-20 08:30:51.520791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.146 [2024-11-20 08:30:51.520866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.146 [2024-11-20 08:30:51.536085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.146 [2024-11-20 08:30:51.536294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.146 [2024-11-20 08:30:51.536433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.146 [2024-11-20 08:30:51.552773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.146 [2024-11-20 08:30:51.553028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.146 [2024-11-20 08:30:51.553229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.146 [2024-11-20 08:30:51.571134] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.146 [2024-11-20 08:30:51.571324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.146 [2024-11-20 08:30:51.571456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.146 [2024-11-20 08:30:51.589188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.146 [2024-11-20 08:30:51.589388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.146 [2024-11-20 08:30:51.589528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.146 [2024-11-20 08:30:51.606908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.146 [2024-11-20 08:30:51.607134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.146 [2024-11-20 08:30:51.607289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.146 [2024-11-20 08:30:51.624783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.146 [2024-11-20 08:30:51.625013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.146 [2024-11-20 08:30:51.625133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.146 [2024-11-20 08:30:51.642630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.146 [2024-11-20 08:30:51.642860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.146 [2024-11-20 08:30:51.643039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.146 [2024-11-20 08:30:51.660222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.146 [2024-11-20 08:30:51.660396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.146 [2024-11-20 08:30:51.660429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.146 [2024-11-20 08:30:51.676524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.146 [2024-11-20 08:30:51.676698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.146 [2024-11-20 08:30:51.676731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
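Every failure in this run repeats the same three-line pattern visible above: the error accel module corrupts a CRC32C result (the accel_error_inject_error -o crc32c -t corrupt -i 256 call earlier), the NVMe/TCP data digest check then fails ("data digest error on tqpair"), and the READ is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 / status code 0x22, which the bdev layer retries because bdev_nvme_set_options was called with --bdev-retry-count -1. As a rough sanity check, the failures can be tallied from a saved copy of this console output; the lines below are only an illustrative helper, not part of the test suite, and the log path is a placeholder:

  LOG=/tmp/nvmf_digest_error.console.log   # placeholder path for a saved copy of this output
  # each injected failure logs one digest-error message and one transient-transport completion
  digest_errors=$(grep -o 'data digest error on tqpair' "$LOG" | wc -l)
  transient_errors=$(grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$LOG" | wc -l)
  echo "digest errors: ${digest_errors}, transient transport completions: ${transient_errors}"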
00:18:04.146 [2024-11-20 08:30:51.694045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.146 [2024-11-20 08:30:51.694199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.146 [2024-11-20 08:30:51.694217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.405 [2024-11-20 08:30:51.710406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.405 [2024-11-20 08:30:51.710444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.405 [2024-11-20 08:30:51.710457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.405 [2024-11-20 08:30:51.726789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.405 [2024-11-20 08:30:51.726850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.405 [2024-11-20 08:30:51.726888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.405 [2024-11-20 08:30:51.742925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.405 [2024-11-20 08:30:51.742960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.405 [2024-11-20 08:30:51.742989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.405 [2024-11-20 08:30:51.759678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.405 [2024-11-20 08:30:51.759717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.405 [2024-11-20 08:30:51.759731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.405 [2024-11-20 08:30:51.776114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.406 [2024-11-20 08:30:51.776294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.406 [2024-11-20 08:30:51.776328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.406 [2024-11-20 08:30:51.794263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.406 [2024-11-20 08:30:51.794297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.406 [2024-11-20 08:30:51.794343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.406 [2024-11-20 08:30:51.811951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.406 [2024-11-20 08:30:51.811989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.406 [2024-11-20 08:30:51.812003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.406 [2024-11-20 08:30:51.829535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.406 [2024-11-20 08:30:51.829573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.406 [2024-11-20 08:30:51.829588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.406 [2024-11-20 08:30:51.846479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.406 [2024-11-20 08:30:51.846515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.406 [2024-11-20 08:30:51.846528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.406 [2024-11-20 08:30:51.862618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.406 [2024-11-20 08:30:51.862654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.406 [2024-11-20 08:30:51.862668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.406 [2024-11-20 08:30:51.879657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.406 [2024-11-20 08:30:51.879695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.406 [2024-11-20 08:30:51.879709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.406 15054.00 IOPS, 58.80 MiB/s [2024-11-20T08:30:51.967Z] [2024-11-20 08:30:51.895942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.406 [2024-11-20 08:30:51.895977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.406 [2024-11-20 08:30:51.895990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.406 [2024-11-20 08:30:51.911554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.406 [2024-11-20 08:30:51.911595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.406 
[2024-11-20 08:30:51.911616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.406 [2024-11-20 08:30:51.927264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.406 [2024-11-20 08:30:51.927298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.406 [2024-11-20 08:30:51.927311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.406 [2024-11-20 08:30:51.943804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.406 [2024-11-20 08:30:51.944043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.406 [2024-11-20 08:30:51.944061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.665 [2024-11-20 08:30:51.966557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:51.966597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:51.966611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:51.982791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:51.982981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:51.983087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.000433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.000630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.000648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.017508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.017739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.017917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.034263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.034490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22824 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.034646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.052058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.052264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.052410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.069486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.069692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.069884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.086946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.087156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.087290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.103088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.103266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.103410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.120487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.120670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.120794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.138282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.138465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.138634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.156095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.156284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.156317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.172582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.172618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.172647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.189832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.189869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.189898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.205883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.205920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.205951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.666 [2024-11-20 08:30:52.221736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.666 [2024-11-20 08:30:52.221773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.666 [2024-11-20 08:30:52.221803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.925 [2024-11-20 08:30:52.238053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.925 [2024-11-20 08:30:52.238088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.925 [2024-11-20 08:30:52.238117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.925 [2024-11-20 08:30:52.254194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.254230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.254259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.270466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 
00:18:04.926 [2024-11-20 08:30:52.270501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.270530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.286853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.286907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.286938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.304201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.304241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.304256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.321911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.322117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.322136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.338156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.338190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.338220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.353833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.353867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.353896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.370243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.370278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.370323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.385972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.386019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.386032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.401454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.401490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.401502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.418409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.418444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.418457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.435079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.435113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.435141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.451741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.451786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.451818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.926 [2024-11-20 08:30:52.467957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:04.926 [2024-11-20 08:30:52.468138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.926 [2024-11-20 08:30:52.468171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.185 [2024-11-20 08:30:52.485012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.185 [2024-11-20 08:30:52.485051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.185 [2024-11-20 08:30:52.485082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.185 [2024-11-20 08:30:52.502641] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.185 [2024-11-20 08:30:52.502698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.185 [2024-11-20 08:30:52.502728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.185 [2024-11-20 08:30:52.519231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.185 [2024-11-20 08:30:52.519265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.185 [2024-11-20 08:30:52.519292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.185 [2024-11-20 08:30:52.534722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.185 [2024-11-20 08:30:52.534756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.185 [2024-11-20 08:30:52.534797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.185 [2024-11-20 08:30:52.550074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.550107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.550136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.186 [2024-11-20 08:30:52.566005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.566042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.566056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.186 [2024-11-20 08:30:52.582329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.582363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.582391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.186 [2024-11-20 08:30:52.597719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.597754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.597782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
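Once the two-second run finishes (the interval report, latency summary and JSON results appear a little further down), host/digest.sh reads the error counter back from the bdevperf instance via get_transient_errcount, i.e. bdev_get_iostat piped through the jq filter traced at the end of this run. Issued by hand it looks roughly like the sketch below; the socket, bdev name and jq filter are taken verbatim from the trace, and the per-status-code counters come from the --nvme-error-stat option passed to bdev_nvme_set_options when the controller was set up:

  # same query the test issues at the end of the run; prints the number of
  # Command Transient Transport Error (00/22) completions recorded for nvme0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The JSON summary below is self-consistent with this: 15192.04 IOPS of 4096-byte reads is 15192.04 * 4096 / 1048576, about 59.34 MiB/s, and io_failed stays 0 because every transient transport error is retried (--bdev-retry-count -1) rather than surfaced to the job.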
00:18:05.186 [2024-11-20 08:30:52.612931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.612965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.612993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.186 [2024-11-20 08:30:52.628023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.628217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.628250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.186 [2024-11-20 08:30:52.643646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.643857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.643876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.186 [2024-11-20 08:30:52.659406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.659646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.659830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.186 [2024-11-20 08:30:52.675371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.675563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.675750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.186 [2024-11-20 08:30:52.691650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.691859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.692039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.186 [2024-11-20 08:30:52.708439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.708651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.708895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.186 [2024-11-20 08:30:52.725223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.725402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.725540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.186 [2024-11-20 08:30:52.741518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.186 [2024-11-20 08:30:52.741714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.186 [2024-11-20 08:30:52.741920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.446 [2024-11-20 08:30:52.759320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.446 [2024-11-20 08:30:52.759522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.446 [2024-11-20 08:30:52.759711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.446 [2024-11-20 08:30:52.777034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.446 [2024-11-20 08:30:52.777244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.446 [2024-11-20 08:30:52.777383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.446 [2024-11-20 08:30:52.794246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.446 [2024-11-20 08:30:52.794458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.446 [2024-11-20 08:30:52.794652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.446 [2024-11-20 08:30:52.812015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.446 [2024-11-20 08:30:52.812216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.446 [2024-11-20 08:30:52.812322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.446 [2024-11-20 08:30:52.830150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.446 [2024-11-20 08:30:52.830186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.446 [2024-11-20 08:30:52.830217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.446 [2024-11-20 08:30:52.847345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.446 [2024-11-20 08:30:52.847381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.446 [2024-11-20 08:30:52.847410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.446 [2024-11-20 08:30:52.864289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.446 [2024-11-20 08:30:52.864485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.446 [2024-11-20 08:30:52.864502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.446 [2024-11-20 08:30:52.880454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5c2c0) 00:18:05.446 [2024-11-20 08:30:52.880668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.446 [2024-11-20 08:30:52.880785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.446 15180.50 IOPS, 59.30 MiB/s 00:18:05.446 Latency(us) 00:18:05.446 [2024-11-20T08:30:53.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.446 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:05.446 nvme0n1 : 2.01 15192.04 59.34 0.00 0.00 8416.92 7298.33 30742.34 00:18:05.446 [2024-11-20T08:30:53.007Z] =================================================================================================================== 00:18:05.446 [2024-11-20T08:30:53.007Z] Total : 15192.04 59.34 0.00 0.00 8416.92 7298.33 30742.34 00:18:05.446 { 00:18:05.446 "results": [ 00:18:05.446 { 00:18:05.446 "job": "nvme0n1", 00:18:05.446 "core_mask": "0x2", 00:18:05.446 "workload": "randread", 00:18:05.446 "status": "finished", 00:18:05.446 "queue_depth": 128, 00:18:05.446 "io_size": 4096, 00:18:05.446 "runtime": 2.006906, 00:18:05.446 "iops": 15192.041879390465, 00:18:05.446 "mibps": 59.343913591369, 00:18:05.446 "io_failed": 0, 00:18:05.446 "io_timeout": 0, 00:18:05.446 "avg_latency_us": 8416.920386786293, 00:18:05.446 "min_latency_us": 7298.327272727272, 00:18:05.446 "max_latency_us": 30742.34181818182 00:18:05.446 } 00:18:05.446 ], 00:18:05.446 "core_count": 1 00:18:05.446 } 00:18:05.446 08:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:05.446 08:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:05.446 | .driver_specific 00:18:05.446 | .nvme_error 00:18:05.446 | .status_code 00:18:05.446 | .command_transient_transport_error' 00:18:05.446 08:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:05.446 08:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:18:05.705 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 119 > 0 )) 00:18:05.705 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80395 00:18:05.705 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' -z 80395 ']' 00:18:05.705 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill -0 80395 00:18:05.705 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # uname 00:18:05.705 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:18:05.705 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80395 00:18:05.705 killing process with pid 80395 00:18:05.705 Received shutdown signal, test time was about 2.000000 seconds 00:18:05.705 00:18:05.706 Latency(us) 00:18:05.706 [2024-11-20T08:30:53.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.706 [2024-11-20T08:30:53.267Z] =================================================================================================================== 00:18:05.706 [2024-11-20T08:30:53.267Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.706 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:18:05.706 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:18:05.706 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80395' 00:18:05.706 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # kill 80395 00:18:05.706 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@981 -- # wait 80395 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80448 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80448 /var/tmp/bperf.sock 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # '[' -z 80448 ']' 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@843 -- # local max_retries=100 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:05.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@847 -- # xtrace_disable 00:18:05.965 08:30:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:05.965 [2024-11-20 08:30:53.506124] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:18:05.965 [2024-11-20 08:30:53.506454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80448 ] 00:18:05.965 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:05.965 Zero copy mechanism will not be used. 00:18:06.225 [2024-11-20 08:30:53.648448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.225 [2024-11-20 08:30:53.695718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.225 [2024-11-20 08:30:53.750362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:07.163 08:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:18:07.163 08:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@871 -- # return 0 00:18:07.163 08:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:07.163 08:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:07.422 08:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:07.422 08:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:07.422 08:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:07.422 08:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:18:07.422 08:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.422 08:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.681 nvme0n1 00:18:07.681 08:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:07.681 08:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:07.681 08:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:07.681 08:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 
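The per-pass flow captured in the xtrace above reduces to a short RPC sequence, restated here as a hedged shell sketch rather than the harness itself: the rpc.py/bdevperf.py paths, the /var/tmp/bperf.sock socket, the nvme0/nvme0n1 names and all flag values are copied verbatim from the trace; the $rpc shortcut and the single-line jq pipeline are condensed for readability, and which RPC socket rpc_cmd resolves to is left to the harness, since it is not visible in this excerpt.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# bdevperf side: keep per-controller NVMe error statistics and use retry count -1 (as in the trace),
# so corrupted digests surface as transient transport error counters rather than failed I/O
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# attach the controller over TCP with data digest enabled
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# arm crc32c corruption in the accel framework (flags copied from the trace)
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# drive the workload, then read back the transient transport error count (get_transient_errcount)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # the check at host/digest.sh@71 passes only if digest errors were actually observed

In the qd-128 randread pass that just finished this counter came back as 119; the 131072-byte, qd-16 pass started above repeats the same sequence with the new bdevperf geometry.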
00:18:07.681 08:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:07.681 08:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:07.681 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:07.681 Zero copy mechanism will not be used. 00:18:07.681 Running I/O for 2 seconds... 00:18:07.942 [2024-11-20 08:30:55.249746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.249871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.249888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.254142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.254194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.254223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.258461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.258498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.258527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.263032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.263075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.263089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.267303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.267342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.267356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.271634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.271673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.271687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.276069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.276106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.276136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.280526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.280710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.280729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.285086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.285125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.285155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.289572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.289616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.289646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.293830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.293914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.293929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.298235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.298270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.298298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.302475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.302511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.302540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.306733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.306768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.306796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.310764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.310831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.310862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.314778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.314823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.314853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.318817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.318883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.318914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.322909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.322944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.322988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.326954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.326990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.327018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.331131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.331168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.942 [2024-11-20 08:30:55.331196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.942 [2024-11-20 08:30:55.335259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.942 [2024-11-20 08:30:55.335294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.335340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.339852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.339921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.339934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.344163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.344201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.344230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.348746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.348784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.348796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.353184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.353237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.353250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.357886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.357934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.357948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.362614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.362707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 
[2024-11-20 08:30:55.362734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.367366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.367522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.367539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.372110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.372164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.372178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.376812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.376900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.376916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.381444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.381481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.381495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.385878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.385927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.385942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.390446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.390624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.390641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.395296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.395349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.395362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.399583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.399653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.399666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.403865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.403921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.403934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.408111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.408165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.408192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.412504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.412542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.412555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.416689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.416726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.416738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.421053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.421094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.421108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.425352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.425392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.425406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.429769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.429819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.429865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.434144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.434181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.434194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.438201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.438239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.438251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.442326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.442379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.442391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.446461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.446496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.446509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.450603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.943 [2024-11-20 08:30:55.450655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.943 [2024-11-20 08:30:55.450667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.943 [2024-11-20 08:30:55.454761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.944 [2024-11-20 08:30:55.454798] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.944 [2024-11-20 08:30:55.454841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.944 [2024-11-20 08:30:55.458933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.944 [2024-11-20 08:30:55.458971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.944 [2024-11-20 08:30:55.458983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.944 [2024-11-20 08:30:55.462969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.944 [2024-11-20 08:30:55.463006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.944 [2024-11-20 08:30:55.463018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.944 [2024-11-20 08:30:55.467037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.944 [2024-11-20 08:30:55.467072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.944 [2024-11-20 08:30:55.467085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.944 [2024-11-20 08:30:55.471009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.944 [2024-11-20 08:30:55.471045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.944 [2024-11-20 08:30:55.471057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.944 [2024-11-20 08:30:55.475274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.944 [2024-11-20 08:30:55.475312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.944 [2024-11-20 08:30:55.475324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.944 [2024-11-20 08:30:55.479331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.944 [2024-11-20 08:30:55.479366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.944 [2024-11-20 08:30:55.479380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.944 [2024-11-20 08:30:55.483409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2301400) 00:18:07.944 [2024-11-20 08:30:55.483444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.944 [2024-11-20 08:30:55.483456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:07.944 [2024-11-20 08:30:55.487616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.944 [2024-11-20 08:30:55.487653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.944 [2024-11-20 08:30:55.487667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:07.944 [2024-11-20 08:30:55.491812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.944 [2024-11-20 08:30:55.491859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.944 [2024-11-20 08:30:55.491872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:07.944 [2024-11-20 08:30:55.495819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:07.944 [2024-11-20 08:30:55.495890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.944 [2024-11-20 08:30:55.495904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.205 [2024-11-20 08:30:55.500037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.205 [2024-11-20 08:30:55.500077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.205 [2024-11-20 08:30:55.500091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.205 [2024-11-20 08:30:55.504592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.504644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.504657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.508697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.508734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.508746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.512889] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.512939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.512953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.517096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.517131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.517161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.521404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.521440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.521469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.525598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.525651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.525698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.529853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.529887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.529915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.533868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.533903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.533932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.537861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.537895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.537924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:18:08.206 [2024-11-20 08:30:55.541968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.542003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.542032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.546045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.546081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.546109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.550058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.550094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.550139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.554276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.554322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.554336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.558520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.558560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.558573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.562735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.562772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.562802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.567072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.567112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.567126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.571362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.571401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.571415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.575545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.575581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.575645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.579738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.579777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.579791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.583973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.584009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.584038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.588109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.588146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.588174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.592067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.592102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.592130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.596251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.596287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.596316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.600275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.600325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.600353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.604376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.604413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.604441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.608534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.608569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.608598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.206 [2024-11-20 08:30:55.612792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.206 [2024-11-20 08:30:55.612892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.206 [2024-11-20 08:30:55.612906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.616812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.616873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.616887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.620908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.620945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.620973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.624998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.625033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 
[2024-11-20 08:30:55.625062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.629073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.629109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.629137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.633136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.633172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.633201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.637255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.637291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.637330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.641324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.641359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.641388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.645466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.645502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.645515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.649697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.649733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.649761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.653677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.653712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.653741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.657667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.657703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.657731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.661739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.661790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.661849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.665797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.665863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.665892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.669884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.669920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.669949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.674351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.674388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.674417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.678756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.678795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.678837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.683155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.683196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.683225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.687840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.687878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.687892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.692236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.692275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.692305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.696776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.696846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.696862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.701345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.701542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.701576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.706320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.706495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.706667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.711048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.711221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.711349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.715814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.716016] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.716154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.720566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.720729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.720777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.724927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.724965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.724994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.207 [2024-11-20 08:30:55.729027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.207 [2024-11-20 08:30:55.729063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.207 [2024-11-20 08:30:55.729091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.208 [2024-11-20 08:30:55.733301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.208 [2024-11-20 08:30:55.733339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.208 [2024-11-20 08:30:55.733368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.208 [2024-11-20 08:30:55.737538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.208 [2024-11-20 08:30:55.737577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.208 [2024-11-20 08:30:55.737590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.208 [2024-11-20 08:30:55.741679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.208 [2024-11-20 08:30:55.741716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.208 [2024-11-20 08:30:55.741744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.208 [2024-11-20 08:30:55.745874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2301400) 00:18:08.208 [2024-11-20 08:30:55.745912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.208 [2024-11-20 08:30:55.745941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.208 [2024-11-20 08:30:55.750103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.208 [2024-11-20 08:30:55.750138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.208 [2024-11-20 08:30:55.750168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.208 [2024-11-20 08:30:55.754326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.208 [2024-11-20 08:30:55.754361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.208 [2024-11-20 08:30:55.754389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.208 [2024-11-20 08:30:55.758589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.208 [2024-11-20 08:30:55.758627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.208 [2024-11-20 08:30:55.758640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.762947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.762999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.763028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.767057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.767095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.767125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.771121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.771156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.771184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.775204] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.775239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.775268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.779642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.779677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.779706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.783761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.783830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.783862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.787878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.787944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.787973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.792109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.792148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.792162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.796712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.796752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.796782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.801267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.801306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.801336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:18:08.469 [2024-11-20 08:30:55.805712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.805752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.805766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.810093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.810129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.810174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.814578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.814616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.814646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.818963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.818998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.819026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.823244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.823284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.823297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.827686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.827724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.827755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.832063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.832100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.832128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.469 [2024-11-20 08:30:55.836293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.469 [2024-11-20 08:30:55.836331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.469 [2024-11-20 08:30:55.836360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.840747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.840786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.840832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.845128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.845181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.845210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.849540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.849602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.849631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.853987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.854024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.854053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.858264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.858299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.858327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.862664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.862703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.862732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.867282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.867333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.867363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.871709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.871748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.871762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.876152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.876189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.876218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.880640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.880681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.880695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.885082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.885122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.885136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.889635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.889676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.889690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.894253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.894290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 
[2024-11-20 08:30:55.894330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.898798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.898866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.898881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.903265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.903305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.903319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.907786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.907840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.907854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.912155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.912364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.912382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.916924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.916970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.916998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.921176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.921239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.921269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.925256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.925291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.925319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.929278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.929314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.929342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.933432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.933468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.933496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.937415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.937451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.937479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.941529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.941581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.941610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.945735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.945788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.945825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.949713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.949750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.470 [2024-11-20 08:30:55.949779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.470 [2024-11-20 08:30:55.953760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.470 [2024-11-20 08:30:55.953797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:55.953834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:55.957770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:55.957835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:55.957865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:55.961771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:55.961828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:55.961842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:55.965786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:55.965845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:55.965859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:55.970125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:55.970165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:55.970178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:55.974611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:55.974648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:55.974676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:55.979045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:55.979083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:55.979112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:55.983370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:55.983408] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:55.983422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:55.987704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:55.987743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:55.987758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:55.992125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:55.992178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:55.992207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:55.996519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:55.996574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:55.996619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:56.001023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:56.001060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:56.001089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:56.005462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:56.005500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:56.005528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:56.009888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:56.009966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:56.009994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:56.014239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:56.014275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:56.014304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:56.018521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:56.018574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:56.018604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.471 [2024-11-20 08:30:56.022987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.471 [2024-11-20 08:30:56.023023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.471 [2024-11-20 08:30:56.023051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.027256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.027291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.027320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.031310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.031344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.031373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.035424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.035458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.035487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.039828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.039875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.039905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.044230] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.044266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.044295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.048722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.048762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.048776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.053244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.053415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.053433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.058000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.058174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.058361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.062753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.062955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.063204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.067560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.067798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.067978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.072453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.072631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.072787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.077315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.077480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.077745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.082381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.082556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.082718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.087166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.087392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.087539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.092095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.092270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.092449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.096731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.096917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.097058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.101367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.101405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.101434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.105516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.105554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.105566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.109551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.732 [2024-11-20 08:30:56.109588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.732 [2024-11-20 08:30:56.109616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.732 [2024-11-20 08:30:56.113721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.113757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.113786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.117776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.117845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.117859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.121677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.121713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.121725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.125634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.125670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.125697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.129751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.129785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.129813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.133751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.133787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.133824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.137572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.137625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.137652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.141536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.141572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.141601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.145500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.145536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.145565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.149638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.149679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.149692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.153858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.153897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.153911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.158148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.158188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.158202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.162372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.162409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:08.733 [2024-11-20 08:30:56.162438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.166752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.166790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.166849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.171270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.171340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.171354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.175685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.175724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.175738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.180211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.180249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.180277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.184646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.184701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.184731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.189128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.189165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.189194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.193349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.193386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1184 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.193415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.197704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.197742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.197771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.202067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.202103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.202132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.206429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.206466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.206494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.210814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.210867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.210897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.215127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.215165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.215194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.219666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.219705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.219718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.733 [2024-11-20 08:30:56.224237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.733 [2024-11-20 08:30:56.224277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.733 [2024-11-20 08:30:56.224290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.228907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.229023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.229054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.233384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.233420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.233448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.237717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.237759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.237773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.734 7114.00 IOPS, 889.25 MiB/s [2024-11-20T08:30:56.295Z] [2024-11-20 08:30:56.243638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.243679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.243692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.248079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.248116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.248144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.252450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.252491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.252504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.256773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.256840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.256855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.261167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.261202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.261231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.265521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.265561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.265575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.269862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.269901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.269914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.274151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.274190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.274220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.278646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.278686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.278716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.282857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.282905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.282951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.734 [2024-11-20 08:30:56.286977] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.734 [2024-11-20 08:30:56.287029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.734 [2024-11-20 08:30:56.287058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.995 [2024-11-20 08:30:56.291231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.995 [2024-11-20 08:30:56.291268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.995 [2024-11-20 08:30:56.291296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.995 [2024-11-20 08:30:56.295678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.995 [2024-11-20 08:30:56.295717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.995 [2024-11-20 08:30:56.295731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.995 [2024-11-20 08:30:56.300088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.995 [2024-11-20 08:30:56.300155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.995 [2024-11-20 08:30:56.300183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.995 [2024-11-20 08:30:56.304501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.995 [2024-11-20 08:30:56.304540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.995 [2024-11-20 08:30:56.304572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.995 [2024-11-20 08:30:56.309079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.995 [2024-11-20 08:30:56.309117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.995 [2024-11-20 08:30:56.309162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.995 [2024-11-20 08:30:56.313510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.995 [2024-11-20 08:30:56.313564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.995 [2024-11-20 08:30:56.313593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:18:08.995 [2024-11-20 08:30:56.317968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.995 [2024-11-20 08:30:56.318005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.995 [2024-11-20 08:30:56.318034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.995 [2024-11-20 08:30:56.322285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.995 [2024-11-20 08:30:56.322337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.995 [2024-11-20 08:30:56.322349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.995 [2024-11-20 08:30:56.326486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.995 [2024-11-20 08:30:56.326522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.995 [2024-11-20 08:30:56.326551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.995 [2024-11-20 08:30:56.330891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.330930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.330943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.335204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.335244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.335258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.339561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.339609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.339623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.343941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.343981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.343996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.348312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.348369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.348397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.352723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.352764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.352777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.357063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.357100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.357128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.361499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.361539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.361554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.365968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.366006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.366019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.370336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.370375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.370388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.374721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.374760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.374790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.379067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.379121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.379151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.383581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.383649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.383664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.387911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.387950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.387964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.392253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.392290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.392320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.396591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.396630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.396644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.400940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.400977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.400991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.405372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.405406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 
[2024-11-20 08:30:56.405435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.410004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.410042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.410072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.414417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.414458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.414472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.418721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.418761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.418774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.423229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.423283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.423328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.427746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.427787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.427818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.432283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.432321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.432350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.436671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.436711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.436725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.441130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.441178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.441191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.445361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.996 [2024-11-20 08:30:56.445398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.996 [2024-11-20 08:30:56.445427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.996 [2024-11-20 08:30:56.449757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.449798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.449827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.454058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.454098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.454111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.458385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.458425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.458439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.462663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.462703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.462716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.466866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.466903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.466932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.471037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.471074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.471103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.475197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.475245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.475274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.479418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.479454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.479483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.483765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.483838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.483853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.487963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.488015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.488043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.492114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.492179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.492207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.496253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.496289] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.496317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.500212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.500247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.500274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.504245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.504280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.504308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.508348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.508385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.508413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.512700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.512740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.512753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.516979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.517017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.517031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.521365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.521404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.521418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.525847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 
00:18:08.997 [2024-11-20 08:30:56.525886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.525901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.530134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.530174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.530188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.534441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.534482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.534496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.538871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.538923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.538936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.543250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.543289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.543301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.547760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.547818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.547833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.997 [2024-11-20 08:30:56.552375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:08.997 [2024-11-20 08:30:56.552414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.997 [2024-11-20 08:30:56.552427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.556875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.556929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.556944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.561444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.561483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.561496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.565870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.565918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.565932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.570072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.570108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.570121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.574537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.574576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.574589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.578774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.578853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.578884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.583272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.583326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.583339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.587520] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.587588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.587611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.592060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.592094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.592105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.596475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.596515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.596528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.600719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.600760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.600774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.605152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.605192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.605206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.609490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.609530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.609543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.613822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.613875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.613891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:18:09.259 [2024-11-20 08:30:56.618205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.618244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.618257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.622746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.622783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.622796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.627152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.627201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.627214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.631504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.631544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.631558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.635990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.636029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.636042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.640257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.640295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.640307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.644616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.644669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.644681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.648918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.648981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.259 [2024-11-20 08:30:56.648994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.259 [2024-11-20 08:30:56.653392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.259 [2024-11-20 08:30:56.653430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.653442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.657680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.657718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.657730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.661860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.661897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.661910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.665895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.665931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.665943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.670185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.670222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.670235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.674286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.674323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.674335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.678293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.678330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.678343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.682385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.682423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.682436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.686669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.686710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.686724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.690984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.691022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.691035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.695191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.695241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.695254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.699701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.699740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.699754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.704143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.704181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:09.260 [2024-11-20 08:30:56.704210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.708531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.708587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.708602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.712960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.712996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.713025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.717251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.717304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.717333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.721650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.721690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.721720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.726238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.726275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.726304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.730725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.730764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.730793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.735112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.735149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18784 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.735178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.739414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.739452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.739465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.743682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.743721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.743734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.747923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.747990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.748018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.752145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.752181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.752209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.756432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.756470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.756500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.760845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.760915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.760959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.765289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.765325] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.260 [2024-11-20 08:30:56.765354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.260 [2024-11-20 08:30:56.769743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.260 [2024-11-20 08:30:56.769781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.261 [2024-11-20 08:30:56.769811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.261 [2024-11-20 08:30:56.774331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.261 [2024-11-20 08:30:56.774372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.261 [2024-11-20 08:30:56.774385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.261 [2024-11-20 08:30:56.778625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.261 [2024-11-20 08:30:56.778666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.261 [2024-11-20 08:30:56.778679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.261 [2024-11-20 08:30:56.783015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.261 [2024-11-20 08:30:56.783052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.261 [2024-11-20 08:30:56.783081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.261 [2024-11-20 08:30:56.787430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.261 [2024-11-20 08:30:56.787468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.261 [2024-11-20 08:30:56.787481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.261 [2024-11-20 08:30:56.792026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.261 [2024-11-20 08:30:56.792064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.261 [2024-11-20 08:30:56.792093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.261 [2024-11-20 08:30:56.796417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.261 [2024-11-20 08:30:56.796454] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.261 [2024-11-20 08:30:56.796467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.261 [2024-11-20 08:30:56.800774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.261 [2024-11-20 08:30:56.800827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.261 [2024-11-20 08:30:56.800842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.261 [2024-11-20 08:30:56.805201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.261 [2024-11-20 08:30:56.805242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.261 [2024-11-20 08:30:56.805271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.261 [2024-11-20 08:30:56.809878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.261 [2024-11-20 08:30:56.809929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.261 [2024-11-20 08:30:56.809980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.261 [2024-11-20 08:30:56.814523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.261 [2024-11-20 08:30:56.814569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.261 [2024-11-20 08:30:56.814583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.575 [2024-11-20 08:30:56.818840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.575 [2024-11-20 08:30:56.818875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-20 08:30:56.818904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.575 [2024-11-20 08:30:56.822875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.575 [2024-11-20 08:30:56.822910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-20 08:30:56.822938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.575 [2024-11-20 08:30:56.827278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2301400) 00:18:09.575 [2024-11-20 08:30:56.827316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-20 08:30:56.827345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.575 [2024-11-20 08:30:56.831628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.575 [2024-11-20 08:30:56.831666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-20 08:30:56.831680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.575 [2024-11-20 08:30:56.836164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.575 [2024-11-20 08:30:56.836200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-20 08:30:56.836228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.575 [2024-11-20 08:30:56.840323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.575 [2024-11-20 08:30:56.840359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-20 08:30:56.840388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.575 [2024-11-20 08:30:56.844249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.575 [2024-11-20 08:30:56.844285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-20 08:30:56.844313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.575 [2024-11-20 08:30:56.848259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.575 [2024-11-20 08:30:56.848293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-20 08:30:56.848321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.575 [2024-11-20 08:30:56.852268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.575 [2024-11-20 08:30:56.852304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-20 08:30:56.852332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.575 [2024-11-20 08:30:56.856697] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.575 [2024-11-20 08:30:56.856735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.575 [2024-11-20 08:30:56.856764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.575 [2024-11-20 08:30:56.861298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.861336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.861366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.865538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.865608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.865636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.869931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.869996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.870011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.874471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.874510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.874539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.879176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.879212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.879240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.883529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.883583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.883605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:18:09.576 [2024-11-20 08:30:56.888063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.888102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.888115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.892358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.892397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.892411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.896679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.896734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.896763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.901003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.901042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.901056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.905234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.905274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.905288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.909607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.909648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.909662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.913954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.913994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.914008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.918242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.918280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.918309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.922811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.922878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.922893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.927303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.927359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.927373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.931838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.931878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.931891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.936205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.936243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.936272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.940739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.940778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.940807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.945149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.945186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.945227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.949495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.949533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.949562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.953837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.953903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.953934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.958085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.958121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.958150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.962280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.962331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.962343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.966605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.966646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.966659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.970896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.970933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.576 [2024-11-20 08:30:56.970962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.576 [2024-11-20 08:30:56.975500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.576 [2024-11-20 08:30:56.975539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:09.576 [2024-11-20 08:30:56.975568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:56.979997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:56.980034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:56.980064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:56.984393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:56.984430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:56.984458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:56.989463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:56.989504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:56.989518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:56.993691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:56.993730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:56.993744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:56.998096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:56.998133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:56.998163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.002557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.002597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.002611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.006875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.006915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22368 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.006929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.011273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.011322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.011351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.015696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.015735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.015748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.019932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.019984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.020012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.024243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.024284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.024297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.028614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.028665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.028694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.033130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.033170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.033183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.037459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.037498] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.037528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.041808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.041861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.041876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.045999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.046035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.046064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.050487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.050525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.050563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.054802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.054865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.054896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.058954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.058990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.059018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.063068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.063101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.063129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.067145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 
08:30:57.067181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.067209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.071191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.071225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.071253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.075277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.075312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.075340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.079308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.079343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.079371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.083308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.083359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.083371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.087381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.087417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.087445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.577 [2024-11-20 08:30:57.091688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.577 [2024-11-20 08:30:57.091725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.577 [2024-11-20 08:30:57.091755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.578 [2024-11-20 08:30:57.096055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2301400) 00:18:09.578 [2024-11-20 08:30:57.096096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.578 [2024-11-20 08:30:57.096110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.578 [2024-11-20 08:30:57.100473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.578 [2024-11-20 08:30:57.100510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.578 [2024-11-20 08:30:57.100540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.578 [2024-11-20 08:30:57.104718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.578 [2024-11-20 08:30:57.104754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.578 [2024-11-20 08:30:57.104783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.578 [2024-11-20 08:30:57.109130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.578 [2024-11-20 08:30:57.109167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.578 [2024-11-20 08:30:57.109196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.578 [2024-11-20 08:30:57.113387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.578 [2024-11-20 08:30:57.113424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.578 [2024-11-20 08:30:57.113453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.578 [2024-11-20 08:30:57.117854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.578 [2024-11-20 08:30:57.117924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.578 [2024-11-20 08:30:57.117939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.578 [2024-11-20 08:30:57.122391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.578 [2024-11-20 08:30:57.122430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.578 [2024-11-20 08:30:57.122460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.578 [2024-11-20 08:30:57.126906] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.578 [2024-11-20 08:30:57.126958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.578 [2024-11-20 08:30:57.126986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.131342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.131381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.131411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.135910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.135964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.135993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.140320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.140358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.140387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.144822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.144891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.144905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.149187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.149224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.149252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.153660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.153699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.153712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:18:09.849 [2024-11-20 08:30:57.157792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.157877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.157892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.162063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.162099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.162127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.166294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.166347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.166375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.170471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.170506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.170534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.174966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.175004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.175034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.179455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.179492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.179521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.184063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.184100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.184129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.188335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.188375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.188389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.192721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.192762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.192776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.197129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.197324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.197357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.201926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.202003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.202033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.206390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.206446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.206459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.210873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.210926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.210971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.215236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.215273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.215301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.219704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.219743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.219756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.224247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.224287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.224301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.228717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.228952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.228985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.233588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.233630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.233644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.849 [2024-11-20 08:30:57.237983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2301400) 00:18:09.849 [2024-11-20 08:30:57.238034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.849 [2024-11-20 08:30:57.238063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.850 7106.50 IOPS, 888.31 MiB/s 00:18:09.850 Latency(us) 00:18:09.850 [2024-11-20T08:30:57.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.850 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:09.850 nvme0n1 : 2.00 7103.60 887.95 0.00 0.00 2248.94 1846.92 13166.78 00:18:09.850 [2024-11-20T08:30:57.411Z] =================================================================================================================== 00:18:09.850 [2024-11-20T08:30:57.411Z] Total : 7103.60 887.95 0.00 0.00 2248.94 1846.92 13166.78 00:18:09.850 { 00:18:09.850 "results": [ 00:18:09.850 { 00:18:09.850 "job": "nvme0n1", 00:18:09.850 "core_mask": "0x2", 00:18:09.850 "workload": "randread", 00:18:09.850 "status": "finished", 00:18:09.850 "queue_depth": 16, 00:18:09.850 "io_size": 131072, 
00:18:09.850 "runtime": 2.00307, 00:18:09.850 "iops": 7103.595980170438, 00:18:09.850 "mibps": 887.9494975213048, 00:18:09.850 "io_failed": 0, 00:18:09.850 "io_timeout": 0, 00:18:09.850 "avg_latency_us": 2248.944208434759, 00:18:09.850 "min_latency_us": 1846.9236363636364, 00:18:09.850 "max_latency_us": 13166.778181818181 00:18:09.850 } 00:18:09.850 ], 00:18:09.850 "core_count": 1 00:18:09.850 } 00:18:09.850 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:09.850 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:09.850 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:09.850 | .driver_specific 00:18:09.850 | .nvme_error 00:18:09.850 | .status_code 00:18:09.850 | .command_transient_transport_error' 00:18:09.850 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:10.109 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 459 > 0 )) 00:18:10.109 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80448 00:18:10.109 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' -z 80448 ']' 00:18:10.109 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill -0 80448 00:18:10.109 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # uname 00:18:10.109 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:18:10.109 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80448 00:18:10.109 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:18:10.109 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:18:10.109 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80448' 00:18:10.109 killing process with pid 80448 00:18:10.109 Received shutdown signal, test time was about 2.000000 seconds 00:18:10.109 00:18:10.109 Latency(us) 00:18:10.109 [2024-11-20T08:30:57.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.109 [2024-11-20T08:30:57.670Z] =================================================================================================================== 00:18:10.109 [2024-11-20T08:30:57.670Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.109 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # kill 80448 00:18:10.109 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@981 -- # wait 80448 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # bs=4096 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80508 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80508 /var/tmp/bperf.sock 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # '[' -z 80508 ']' 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@843 -- # local max_retries=100 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:10.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@847 -- # xtrace_disable 00:18:10.368 08:30:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:10.368 [2024-11-20 08:30:57.866959] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:18:10.368 [2024-11-20 08:30:57.867660] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80508 ] 00:18:10.628 [2024-11-20 08:30:58.011149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.628 [2024-11-20 08:30:58.061479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.628 [2024-11-20 08:30:58.116573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:10.628 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:18:10.628 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@871 -- # return 0 00:18:10.628 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:10.628 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:10.887 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:10.887 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:10.887 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:10.887 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:18:10.887 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- 
# bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:10.887 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:11.455 nvme0n1 00:18:11.455 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:11.455 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:11.455 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:11.455 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:18:11.455 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:11.455 08:30:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:11.455 Running I/O for 2 seconds... 00:18:11.455 [2024-11-20 08:30:58.927664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f7100 00:18:11.455 [2024-11-20 08:30:58.929278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.455 [2024-11-20 08:30:58.929678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.455 [2024-11-20 08:30:58.944144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f7970 00:18:11.455 [2024-11-20 08:30:58.945996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.455 [2024-11-20 08:30:58.946216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.455 [2024-11-20 08:30:58.961607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f81e0 00:18:11.455 [2024-11-20 08:30:58.963369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.455 [2024-11-20 08:30:58.963607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:11.455 [2024-11-20 08:30:58.978466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f8a50 00:18:11.455 [2024-11-20 08:30:58.980115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.455 [2024-11-20 08:30:58.980383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:11.455 [2024-11-20 08:30:58.994524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f92c0 00:18:11.455 [2024-11-20 08:30:58.996114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.455 [2024-11-20 08:30:58.996332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:11.455 [2024-11-20 08:30:59.010050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f9b30 00:18:11.455 [2024-11-20 08:30:59.011564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.455 [2024-11-20 08:30:59.011793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.026442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fa3a0 00:18:11.722 [2024-11-20 08:30:59.027859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.028087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.042244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fac10 00:18:11.722 [2024-11-20 08:30:59.043929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.044180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.058064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fb480 00:18:11.722 [2024-11-20 08:30:59.059752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.059812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.074789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fbcf0 00:18:11.722 [2024-11-20 08:30:59.076514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.076773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.091739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fc560 00:18:11.722 [2024-11-20 08:30:59.093399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.093636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.107801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fcdd0 00:18:11.722 [2024-11-20 
08:30:59.109454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.109708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.124060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fd640 00:18:11.722 [2024-11-20 08:30:59.125689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.125988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.141009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fdeb0 00:18:11.722 [2024-11-20 08:30:59.142616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.142869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.158111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fe720 00:18:11.722 [2024-11-20 08:30:59.159745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.160016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.174976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ff3c8 00:18:11.722 [2024-11-20 08:30:59.176355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.176584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.195966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ff3c8 00:18:11.722 [2024-11-20 08:30:59.198592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.198819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.212192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fe720 00:18:11.722 [2024-11-20 08:30:59.214451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.214674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.226551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fdeb0 00:18:11.722 
[2024-11-20 08:30:59.228782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.228986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.241130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fd640 00:18:11.722 [2024-11-20 08:30:59.243524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.243726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.256498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fcdd0 00:18:11.722 [2024-11-20 08:30:59.259280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.259479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:11.722 [2024-11-20 08:30:59.273532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fc560 00:18:11.722 [2024-11-20 08:30:59.276079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.722 [2024-11-20 08:30:59.276117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.290319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fbcf0 00:18:11.984 [2024-11-20 08:30:59.292788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.292961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.306825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fb480 00:18:11.984 [2024-11-20 08:30:59.309184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.309219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.323283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166fac10 00:18:11.984 [2024-11-20 08:30:59.325775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.325837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.338831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with 
pdu=0x2000166fa3a0 00:18:11.984 [2024-11-20 08:30:59.341077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.341111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.354259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f9b30 00:18:11.984 [2024-11-20 08:30:59.356632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.356852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.370063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f92c0 00:18:11.984 [2024-11-20 08:30:59.372396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.372548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.385833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f8a50 00:18:11.984 [2024-11-20 08:30:59.387918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.387955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.400724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f81e0 00:18:11.984 [2024-11-20 08:30:59.403058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.403092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.415903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f7970 00:18:11.984 [2024-11-20 08:30:59.418354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.418526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.431969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f7100 00:18:11.984 [2024-11-20 08:30:59.434357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.434532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.447701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x202c5b0) with pdu=0x2000166f6890 00:18:11.984 [2024-11-20 08:30:59.450049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.450087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.464454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f6020 00:18:11.984 [2024-11-20 08:30:59.466726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.466779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.480889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f57b0 00:18:11.984 [2024-11-20 08:30:59.483062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.483097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.497355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f4f40 00:18:11.984 [2024-11-20 08:30:59.499410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.499443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.512998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f46d0 00:18:11.984 [2024-11-20 08:30:59.515066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.515099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:11.984 [2024-11-20 08:30:59.528430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f3e60 00:18:11.984 [2024-11-20 08:30:59.530615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.984 [2024-11-20 08:30:59.530643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:12.244 [2024-11-20 08:30:59.544624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f35f0 00:18:12.244 [2024-11-20 08:30:59.546736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.244 [2024-11-20 08:30:59.546772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:12.244 [2024-11-20 08:30:59.560857] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f2d80 00:18:12.244 [2024-11-20 08:30:59.562988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.244 [2024-11-20 08:30:59.563040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:12.244 [2024-11-20 08:30:59.576676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f2510 00:18:12.244 [2024-11-20 08:30:59.578882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.244 [2024-11-20 08:30:59.578951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:12.244 [2024-11-20 08:30:59.591337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f1ca0 00:18:12.244 [2024-11-20 08:30:59.593199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.244 [2024-11-20 08:30:59.593230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:12.244 [2024-11-20 08:30:59.605971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f1430 00:18:12.244 [2024-11-20 08:30:59.607958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.244 [2024-11-20 08:30:59.608130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:12.244 [2024-11-20 08:30:59.621787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f0bc0 00:18:12.244 [2024-11-20 08:30:59.623763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.244 [2024-11-20 08:30:59.623810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:12.244 [2024-11-20 08:30:59.636015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f0350 00:18:12.244 [2024-11-20 08:30:59.637995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.244 [2024-11-20 08:30:59.638027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:12.244 [2024-11-20 08:30:59.650340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166efae0 00:18:12.244 [2024-11-20 08:30:59.652312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.244 [2024-11-20 08:30:59.652346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:12.244 [2024-11-20 08:30:59.664707] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ef270 00:18:12.244 [2024-11-20 08:30:59.666641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.245 [2024-11-20 08:30:59.666674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:12.245 [2024-11-20 08:30:59.680271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166eea00 00:18:12.245 [2024-11-20 08:30:59.682369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.245 [2024-11-20 08:30:59.682401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:12.245 [2024-11-20 08:30:59.695355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ee190 00:18:12.245 [2024-11-20 08:30:59.697116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.245 [2024-11-20 08:30:59.697148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.245 [2024-11-20 08:30:59.709633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ed920 00:18:12.245 [2024-11-20 08:30:59.711615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.245 [2024-11-20 08:30:59.711649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:12.245 [2024-11-20 08:30:59.725324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ed0b0 00:18:12.245 [2024-11-20 08:30:59.727121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.245 [2024-11-20 08:30:59.727153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:12.245 [2024-11-20 08:30:59.739894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ec840 00:18:12.245 [2024-11-20 08:30:59.741777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.245 [2024-11-20 08:30:59.741836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:12.245 [2024-11-20 08:30:59.754518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ebfd0 00:18:12.245 [2024-11-20 08:30:59.756250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.245 [2024-11-20 08:30:59.756418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:12.245 
[2024-11-20 08:30:59.769727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166eb760 00:18:12.245 [2024-11-20 08:30:59.771533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.245 [2024-11-20 08:30:59.771700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:12.245 [2024-11-20 08:30:59.786052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166eaef0 00:18:12.245 [2024-11-20 08:30:59.787746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.245 [2024-11-20 08:30:59.787784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:12.245 [2024-11-20 08:30:59.800451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ea680 00:18:12.245 [2024-11-20 08:30:59.802324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.245 [2024-11-20 08:30:59.802352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:30:59.815207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e9e10 00:18:12.505 [2024-11-20 08:30:59.817046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:30:59.817080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:30:59.829556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e95a0 00:18:12.505 [2024-11-20 08:30:59.831232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:30:59.831264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:30:59.844907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e8d30 00:18:12.505 [2024-11-20 08:30:59.846773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:30:59.846971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:30:59.860981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e84c0 00:18:12.505 [2024-11-20 08:30:59.862659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:30:59.862855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b 
p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:30:59.877177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e7c50 00:18:12.505 [2024-11-20 08:30:59.878921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:30:59.878983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:30:59.893386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e73e0 00:18:12.505 [2024-11-20 08:30:59.895093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:30:59.895128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:30:59.909651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e6b70 00:18:12.505 [2024-11-20 08:30:59.911409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:30:59.911461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:12.505 16067.00 IOPS, 62.76 MiB/s [2024-11-20T08:31:00.066Z] [2024-11-20 08:30:59.927763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e6300 00:18:12.505 [2024-11-20 08:30:59.929454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:30:59.929492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:30:59.944343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e5a90 00:18:12.505 [2024-11-20 08:30:59.946147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:30:59.946178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:30:59.960800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e5220 00:18:12.505 [2024-11-20 08:30:59.962432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:30:59.962468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:30:59.977154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e49b0 00:18:12.505 [2024-11-20 08:30:59.978696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:30:59.978738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:30:59.993555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e4140 00:18:12.505 [2024-11-20 08:30:59.995211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:30:59.995244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:31:00.009480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e38d0 00:18:12.505 [2024-11-20 08:31:00.010967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:31:00.011002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:31:00.025136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e3060 00:18:12.505 [2024-11-20 08:31:00.026595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:31:00.026627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:31:00.040831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e27f0 00:18:12.505 [2024-11-20 08:31:00.042408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:31:00.042436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:12.505 [2024-11-20 08:31:00.055422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e1f80 00:18:12.505 [2024-11-20 08:31:00.056849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.505 [2024-11-20 08:31:00.057028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:12.765 [2024-11-20 08:31:00.069797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e1710 00:18:12.765 [2024-11-20 08:31:00.071116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.765 [2024-11-20 08:31:00.071149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:12.765 [2024-11-20 08:31:00.084003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e0ea0 00:18:12.765 [2024-11-20 08:31:00.085419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.765 [2024-11-20 
08:31:00.085458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:12.765 [2024-11-20 08:31:00.100156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e0630 00:18:12.765 [2024-11-20 08:31:00.101551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.765 [2024-11-20 08:31:00.101589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:12.765 [2024-11-20 08:31:00.116554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166dfdc0 00:18:12.765 [2024-11-20 08:31:00.118142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.765 [2024-11-20 08:31:00.118175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:12.765 [2024-11-20 08:31:00.132598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166df550 00:18:12.765 [2024-11-20 08:31:00.134050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.765 [2024-11-20 08:31:00.134082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:12.765 [2024-11-20 08:31:00.147920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166dece0 00:18:12.765 [2024-11-20 08:31:00.149470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.765 [2024-11-20 08:31:00.149506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:12.766 [2024-11-20 08:31:00.163521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166de470 00:18:12.766 [2024-11-20 08:31:00.164835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.766 [2024-11-20 08:31:00.165029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:12.766 [2024-11-20 08:31:00.184919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ddc00 00:18:12.766 [2024-11-20 08:31:00.187391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.766 [2024-11-20 08:31:00.187426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.766 [2024-11-20 08:31:00.200978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166de470 00:18:12.766 [2024-11-20 08:31:00.203483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.766 
[2024-11-20 08:31:00.203518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:12.766 [2024-11-20 08:31:00.216357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166dece0 00:18:12.766 [2024-11-20 08:31:00.218711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.766 [2024-11-20 08:31:00.218744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:12.766 [2024-11-20 08:31:00.231192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166df550 00:18:12.766 [2024-11-20 08:31:00.233723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.766 [2024-11-20 08:31:00.233758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:12.766 [2024-11-20 08:31:00.246205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166dfdc0 00:18:12.766 [2024-11-20 08:31:00.248501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.766 [2024-11-20 08:31:00.248660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:12.766 [2024-11-20 08:31:00.261944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e0630 00:18:12.766 [2024-11-20 08:31:00.264263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.766 [2024-11-20 08:31:00.264298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:12.766 [2024-11-20 08:31:00.276279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e0ea0 00:18:12.766 [2024-11-20 08:31:00.278645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.766 [2024-11-20 08:31:00.278677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:12.766 [2024-11-20 08:31:00.290692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e1710 00:18:12.766 [2024-11-20 08:31:00.293062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.766 [2024-11-20 08:31:00.293213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:12.766 [2024-11-20 08:31:00.305446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e1f80 00:18:12.766 [2024-11-20 08:31:00.307995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16150 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:12.766 [2024-11-20 08:31:00.308155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:12.766 [2024-11-20 08:31:00.322251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e27f0 00:18:13.025 [2024-11-20 08:31:00.324688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.025 [2024-11-20 08:31:00.324736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:13.025 [2024-11-20 08:31:00.338335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e3060 00:18:13.025 [2024-11-20 08:31:00.340759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.025 [2024-11-20 08:31:00.340798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:13.025 [2024-11-20 08:31:00.354305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e38d0 00:18:13.025 [2024-11-20 08:31:00.356622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.025 [2024-11-20 08:31:00.356673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.370423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e4140 00:18:13.026 [2024-11-20 08:31:00.372892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.373129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.386989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e49b0 00:18:13.026 [2024-11-20 08:31:00.389610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.389785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.403588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e5220 00:18:13.026 [2024-11-20 08:31:00.406105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.406273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.419124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e5a90 00:18:13.026 [2024-11-20 08:31:00.421412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20621 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.421570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.434639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e6300 00:18:13.026 [2024-11-20 08:31:00.437112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.437277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.450785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e6b70 00:18:13.026 [2024-11-20 08:31:00.453275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.453450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.467484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e73e0 00:18:13.026 [2024-11-20 08:31:00.469943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.470142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.484786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e7c50 00:18:13.026 [2024-11-20 08:31:00.487211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.487439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.502172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e84c0 00:18:13.026 [2024-11-20 08:31:00.504485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.504661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.518156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e8d30 00:18:13.026 [2024-11-20 08:31:00.520273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.520433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.533235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e95a0 00:18:13.026 [2024-11-20 08:31:00.535284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:14367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.535318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.548425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166e9e10 00:18:13.026 [2024-11-20 08:31:00.550541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.550570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.564191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ea680 00:18:13.026 [2024-11-20 08:31:00.566124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.566155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:13.026 [2024-11-20 08:31:00.579299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166eaef0 00:18:13.026 [2024-11-20 08:31:00.581293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.026 [2024-11-20 08:31:00.581326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.594288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166eb760 00:18:13.285 [2024-11-20 08:31:00.596284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.596462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.610019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ebfd0 00:18:13.285 [2024-11-20 08:31:00.612108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.612144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.625379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ec840 00:18:13.285 [2024-11-20 08:31:00.627458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.627492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.640674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ed0b0 00:18:13.285 [2024-11-20 08:31:00.642906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:22 nsid:1 lba:20298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.642937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.656417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ed920 00:18:13.285 [2024-11-20 08:31:00.658379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.658406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.671113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ee190 00:18:13.285 [2024-11-20 08:31:00.673035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.673072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.686098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166eea00 00:18:13.285 [2024-11-20 08:31:00.687900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.688099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.700721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166ef270 00:18:13.285 [2024-11-20 08:31:00.702629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.702661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.715294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166efae0 00:18:13.285 [2024-11-20 08:31:00.717052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.717085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.731160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f0350 00:18:13.285 [2024-11-20 08:31:00.733017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.733050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.746850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f0bc0 00:18:13.285 [2024-11-20 08:31:00.748772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.748853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.763032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f1430 00:18:13.285 [2024-11-20 08:31:00.764893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.765103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.778857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f1ca0 00:18:13.285 [2024-11-20 08:31:00.780789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.781008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.795191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f2510 00:18:13.285 [2024-11-20 08:31:00.796992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.797028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.811505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f2d80 00:18:13.285 [2024-11-20 08:31:00.813307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.813339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.827049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f35f0 00:18:13.285 [2024-11-20 08:31:00.828714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.828919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:13.285 [2024-11-20 08:31:00.841635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f3e60 00:18:13.285 [2024-11-20 08:31:00.843353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.285 [2024-11-20 08:31:00.843385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:13.543 [2024-11-20 08:31:00.857237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f46d0 00:18:13.543 [2024-11-20 
08:31:00.858908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.543 [2024-11-20 08:31:00.858944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.543 [2024-11-20 08:31:00.871917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f4f40 00:18:13.543 [2024-11-20 08:31:00.873701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.543 [2024-11-20 08:31:00.873734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:13.543 [2024-11-20 08:31:00.886404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f57b0 00:18:13.543 [2024-11-20 08:31:00.888038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.543 [2024-11-20 08:31:00.888218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:13.543 [2024-11-20 08:31:00.901268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f6020 00:18:13.543 [2024-11-20 08:31:00.902756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.543 [2024-11-20 08:31:00.902790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:13.543 16130.00 IOPS, 63.01 MiB/s [2024-11-20T08:31:01.105Z] [2024-11-20 08:31:00.916165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c5b0) with pdu=0x2000166f6890 00:18:13.544 [2024-11-20 08:31:00.917700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.544 [2024-11-20 08:31:00.917733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:13.544 00:18:13.544 Latency(us) 00:18:13.544 [2024-11-20T08:31:01.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.544 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:13.544 nvme0n1 : 2.01 16134.65 63.03 0.00 0.00 7926.54 5391.83 29789.09 00:18:13.544 [2024-11-20T08:31:01.105Z] =================================================================================================================== 00:18:13.544 [2024-11-20T08:31:01.105Z] Total : 16134.65 63.03 0.00 0.00 7926.54 5391.83 29789.09 00:18:13.544 { 00:18:13.544 "results": [ 00:18:13.544 { 00:18:13.544 "job": "nvme0n1", 00:18:13.544 "core_mask": "0x2", 00:18:13.544 "workload": "randwrite", 00:18:13.544 "status": "finished", 00:18:13.544 "queue_depth": 128, 00:18:13.544 "io_size": 4096, 00:18:13.544 "runtime": 2.007357, 00:18:13.544 "iops": 16134.648694776266, 00:18:13.544 "mibps": 63.02597146396979, 00:18:13.544 "io_failed": 0, 00:18:13.544 "io_timeout": 0, 00:18:13.544 "avg_latency_us": 7926.539724027978, 00:18:13.544 "min_latency_us": 5391.825454545455, 00:18:13.544 
"max_latency_us": 29789.090909090908 00:18:13.544 } 00:18:13.544 ], 00:18:13.544 "core_count": 1 00:18:13.544 } 00:18:13.544 08:31:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:13.544 08:31:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:13.544 08:31:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:13.544 08:31:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:13.544 | .driver_specific 00:18:13.544 | .nvme_error 00:18:13.544 | .status_code 00:18:13.544 | .command_transient_transport_error' 00:18:13.803 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 127 > 0 )) 00:18:13.803 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80508 00:18:13.803 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' -z 80508 ']' 00:18:13.803 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill -0 80508 00:18:13.803 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # uname 00:18:13.803 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:18:13.803 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80508 00:18:13.803 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:18:13.803 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:18:13.803 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80508' 00:18:13.803 killing process with pid 80508 00:18:13.803 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # kill 80508 00:18:13.803 Received shutdown signal, test time was about 2.000000 seconds 00:18:13.803 00:18:13.803 Latency(us) 00:18:13.803 [2024-11-20T08:31:01.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.803 [2024-11-20T08:31:01.364Z] =================================================================================================================== 00:18:13.803 [2024-11-20T08:31:01.364Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.803 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@981 -- # wait 80508 00:18:14.062 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf 
-m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80555 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80555 /var/tmp/bperf.sock 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # '[' -z 80555 ']' 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@843 -- # local max_retries=100 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:14.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@847 -- # xtrace_disable 00:18:14.063 08:31:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:14.063 [2024-11-20 08:31:01.507954] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:18:14.063 [2024-11-20 08:31:01.508299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80555 ] 00:18:14.063 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:14.063 Zero copy mechanism will not be used. 
00:18:14.322 [2024-11-20 08:31:01.653281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.322 [2024-11-20 08:31:01.708419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.322 [2024-11-20 08:31:01.763105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:15.259 08:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:18:15.259 08:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@871 -- # return 0 00:18:15.259 08:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:15.259 08:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:15.259 08:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:15.259 08:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:15.259 08:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:15.259 08:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:18:15.259 08:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:15.259 08:31:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:15.518 nvme0n1 00:18:15.518 08:31:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:15.519 08:31:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@566 -- # xtrace_disable 00:18:15.519 08:31:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:15.779 08:31:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:18:15.779 08:31:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:15.779 08:31:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:15.779 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:15.779 Zero copy mechanism will not be used. 00:18:15.779 Running I/O for 2 seconds... 
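(The xtrace above condenses the digest-error setup for this bdevperf run. The sketch below restates that sequence in plain shell so the flow is easier to follow; it is reconstructed from the commands visible in the log, not copied from host/digest.sh, and the $bperf_rpc / $tgt_rpc variable names are illustrative stand-ins for the bperf_rpc and rpc_cmd helpers.)

    # rpc.py against the bdevperf instance (bperf_rpc in the log) vs. the
    # default RPC socket used by rpc_cmd; names here are illustrative.
    bperf_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    tgt_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    # Collect per-status NVMe error statistics and retry failed commands
    # indefinitely, so every injected digest failure is counted instead of
    # failing the workload.
    $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start with crc32c error injection disabled, attach the target with data
    # digest enabled (--ddgst), then corrupt every 32nd crc32c computation via
    # the accel error-injection RPC so writes surface as digest errors.
    $tgt_rpc accel_error_inject_error -o crc32c -t disable
    $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Drive the workload, then read back how many commands completed with
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) -- the count the test asserts on.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
    errcount=$($bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')
    (( errcount > 0 ))

(In the earlier run above, this extraction returned 127 transient transport errors, which is why the `(( 127 > 0 ))` check passed before pid 80508 was killed.)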
00:18:15.779 [2024-11-20 08:31:03.214351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.779 [2024-11-20 08:31:03.214452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.779 [2024-11-20 08:31:03.214480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.779 [2024-11-20 08:31:03.220045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.779 [2024-11-20 08:31:03.220125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.779 [2024-11-20 08:31:03.220148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.779 [2024-11-20 08:31:03.225705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.779 [2024-11-20 08:31:03.225792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.779 [2024-11-20 08:31:03.225815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.779 [2024-11-20 08:31:03.230728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.779 [2024-11-20 08:31:03.230818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.779 [2024-11-20 08:31:03.230839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.779 [2024-11-20 08:31:03.236110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.779 [2024-11-20 08:31:03.236227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.779 [2024-11-20 08:31:03.236258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.779 [2024-11-20 08:31:03.241601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.779 [2024-11-20 08:31:03.241688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.779 [2024-11-20 08:31:03.241710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.779 [2024-11-20 08:31:03.246680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.779 [2024-11-20 08:31:03.246766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.779 [2024-11-20 08:31:03.246789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.779 [2024-11-20 08:31:03.251823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.779 [2024-11-20 08:31:03.251984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.779 [2024-11-20 08:31:03.252004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.779 [2024-11-20 08:31:03.256676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.779 [2024-11-20 08:31:03.256991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.779 [2024-11-20 08:31:03.257013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.779 [2024-11-20 08:31:03.261618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.779 [2024-11-20 08:31:03.261720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.779 [2024-11-20 08:31:03.261740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.779 [2024-11-20 08:31:03.266366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.779 [2024-11-20 08:31:03.266451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.779 [2024-11-20 08:31:03.266471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.779 [2024-11-20 08:31:03.271062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.779 [2024-11-20 08:31:03.271164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.779 [2024-11-20 08:31:03.271184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.779 [2024-11-20 08:31:03.275764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.275882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.275904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.780 [2024-11-20 08:31:03.280466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.280720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.280741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.780 [2024-11-20 08:31:03.285414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.285497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.285517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.780 [2024-11-20 08:31:03.290127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.290226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.290246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.780 [2024-11-20 08:31:03.294751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.294897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.294935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.780 [2024-11-20 08:31:03.299894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.299969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.299991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.780 [2024-11-20 08:31:03.304968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.305104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.305127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.780 [2024-11-20 08:31:03.309761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.309948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.309971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.780 [2024-11-20 08:31:03.314998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.315099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.315121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.780 [2024-11-20 08:31:03.320048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.320143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.320164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.780 [2024-11-20 08:31:03.324757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.324917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.324938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.780 [2024-11-20 08:31:03.329541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.329634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.329654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.780 [2024-11-20 08:31:03.334258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:15.780 [2024-11-20 08:31:03.334341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.780 [2024-11-20 08:31:03.334361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.040 [2024-11-20 08:31:03.338960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.040 [2024-11-20 08:31:03.339067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.040 [2024-11-20 08:31:03.339087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.040 [2024-11-20 08:31:03.343730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.040 [2024-11-20 08:31:03.343828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.343883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.348438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.348532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.348553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.353102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.353187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.353207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.357698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.357781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.357800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.362482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.362707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.362727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.367416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.367515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.367536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.372195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.372278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.372297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.376714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.376799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.376819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.381718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.381805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 
08:31:03.381826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.386980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.387079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.387101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.392252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.392365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.392386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.397795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.397982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.398016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.403256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.403340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.403360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.408600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.408683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.408705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.413990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.414065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.414088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.419315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.419406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:16.041 [2024-11-20 08:31:03.419428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.424746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.424864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.424888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.430369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.430453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.430473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.435658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.435734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.435756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.440768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.440927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.440949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.446045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.446132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.446154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.451134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.451208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.451230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.456436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.456537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.456557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.461292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.461522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.461543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.466180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.466398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.466588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.471433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.041 [2024-11-20 08:31:03.471724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.041 [2024-11-20 08:31:03.471994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.041 [2024-11-20 08:31:03.476593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.476860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.477100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.481505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.481745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.481964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.486547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.486801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.487073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.491508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.491765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.492040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.496638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.496923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.497199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.501528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.501760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.501955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.506547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.506795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.506980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.511729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.511818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.511873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.516582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.516677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.516699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.521362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.521646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.521669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.526317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.526403] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.526424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.531549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.531675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.531697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.536629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.536725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.536746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.541884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.542153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.542175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.547346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.547431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.547454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.552765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.552868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.552891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.558130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.558205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.558228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.563653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.563727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.563749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.568892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.569065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.569088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.574042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.574133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.574170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.579160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.579291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.579310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.584525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.584600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.584623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.589640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.589856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.589879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.042 [2024-11-20 08:31:03.594668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.042 [2024-11-20 08:31:03.594754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.042 [2024-11-20 08:31:03.594775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.599620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 
08:31:03.599725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.599747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.604792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.604883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.604906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.609760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.610027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.610047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.615093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.615199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.615220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.620232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.620333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.620354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.625166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.625272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.625292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.630522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.630620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.630642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.635674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 
00:18:16.303 [2024-11-20 08:31:03.635775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.635798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.640922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.641208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.641231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.646097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.646191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.646211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.651138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.651222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.651243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.656270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.656532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.656555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.661237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.661320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.661341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.666004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.666097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.666117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.670830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.670944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.670965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.675633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.675738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.675759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.680279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.680523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.680544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.685359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.685444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.685480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.690211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.690295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.690315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.694920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.695019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.695039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.699685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.699776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.699798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.704503] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.704587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.704608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.709309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.709411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.709431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.714146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.714266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.714294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.719144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.719230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.719250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.723797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.723914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.303 [2024-11-20 08:31:03.723936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.303 [2024-11-20 08:31:03.728523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.303 [2024-11-20 08:31:03.728607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.728628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.733603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.733676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.733706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.738682] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.738885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.738908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.744060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.744155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.744192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.748935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.749033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.749054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.753601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.753695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.753715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.758563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.758804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.758826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.763562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.763678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.763698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.768380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.768465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.768485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.304 
[2024-11-20 08:31:03.773379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.773465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.773485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.778329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.778569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.778591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.783587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.783707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.783729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.788549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.788633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.788653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.793475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.793574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.793594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.798084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.798168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.798188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.802694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.802940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.802961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.807512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.807623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.807643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.812102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.812200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.812220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.817001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.817075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.817097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.822166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.822267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.822289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.827342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.827439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.827459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.832520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.832618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.832640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.837635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.837721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.837742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.842608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.842871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.842920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.847821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.847951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.847988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.852803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.852949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.852970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.304 [2024-11-20 08:31:03.857810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.304 [2024-11-20 08:31:03.857906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.304 [2024-11-20 08:31:03.857927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.862590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.862835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.862869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.867537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.867698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.867722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.872416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.872500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.872520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.877491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.877594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.877615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.882772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.883078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.883099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.888083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.888168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.888188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.893193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.893285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.893306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.898561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.898826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.898849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.904067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.904178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.904198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.909398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.909527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.909563] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.914738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.915042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.915064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.920171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.920263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.920283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.925156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.925241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.925261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.930121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.930203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.930223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.934978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.935071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.935090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.939620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.939726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.939750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.944366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.944459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.944478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.948958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.949052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.949072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.953533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.953617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.953637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.958316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.958568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.958589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.963181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.963263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.963282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.967895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.968006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.968026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.973201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.973275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.973296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.978459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.978658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 
08:31:03.978681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.983343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.983427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.983446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.988465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.564 [2024-11-20 08:31:03.988540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.564 [2024-11-20 08:31:03.988563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.564 [2024-11-20 08:31:03.993511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:03.993608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:03.993647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:03.998392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:03.998632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:03.998654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.003453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.003541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.003562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.008133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.008232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.008252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.012692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.012777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:16.565 [2024-11-20 08:31:04.012797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.017336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.017419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.017439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.021933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.022018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.022037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.026458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.026541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.026560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.031018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.031102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.031122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.035697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.035783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.035804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.040370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.040463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.040483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.045194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.045295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.045315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.050072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.050157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.050177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.054919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.055044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.055064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.060238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.060336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.060358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.065393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.065642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.065665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.071010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.071106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.071127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.076483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.076571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.076594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.081830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.082134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.082157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.087533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.087618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.087640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.093005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.093099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.093120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.098143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.098228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.098249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.103220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.103320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.103358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.108541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.108823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.108861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.113978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.114073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.114093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.565 [2024-11-20 08:31:04.118951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.565 [2024-11-20 08:31:04.119037] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.565 [2024-11-20 08:31:04.119057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.124075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.124148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.124171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.129195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.129282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.129303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.134025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.134109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.134130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.138922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.139017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.139038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.143480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.143563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.143583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.148175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.148269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.148289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.152865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.152983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.153004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.157832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.157915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.157935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.162534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.162628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.162648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.167356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.167441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.167461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.172084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.172194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.172214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.177063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.177148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.177168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.181686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.181770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.181789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.186497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 
08:31:04.186600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.186621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.191539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.191653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.191674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.196464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.196727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.196749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.201796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.201904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.201928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.206869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.206985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.207006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.211569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.211710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.211731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.824 6179.00 IOPS, 772.38 MiB/s [2024-11-20T08:31:04.385Z] [2024-11-20 08:31:04.217041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.217303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.217324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.221585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.221784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.221804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.226076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.226259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.226279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.230189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.230279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.230299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.234351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.234430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.234455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.238751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.824 [2024-11-20 08:31:04.238875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.824 [2024-11-20 08:31:04.238897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.824 [2024-11-20 08:31:04.243125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.243209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.243230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.247297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.247381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.247401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.251522] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.251674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.251696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.256140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.256351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.256380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.260511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.260609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.260629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.264956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.265036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.265057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.269592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.269685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.269707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.274027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.274145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.274166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.278221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.278314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.278334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.825 
[2024-11-20 08:31:04.282345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.282432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.282452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.286437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.286631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.286668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.290794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.291050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.291080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.295143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.295312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.295349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.299228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.299453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.299474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.303435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.303649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.303675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.307514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.307705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.307725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.311680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.311959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.311998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.315875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.315987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.316006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.319975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.320067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.320086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.323988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.324079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.324098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.328134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.328212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.328231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.332215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.332307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.332327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.336247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.336360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.336380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.340265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.340430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.340450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.344497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.344668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.344687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.348776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.348982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.349003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.352974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.353102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.353123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.357144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.357345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.357364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.361362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.361473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.361493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.365670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.365852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.365888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.370135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.370269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.370289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.374316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.374542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.374563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.825 [2024-11-20 08:31:04.378811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:16.825 [2024-11-20 08:31:04.378971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.825 [2024-11-20 08:31:04.378990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.382954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.383035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.383055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.387150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.387233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.387253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.391524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.391647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.391669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.396256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.396340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.396363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.400874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.401037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.401059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.405510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.405630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.405652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.410333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.410429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.410451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.415171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.415281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.415302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.420193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.420276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.420311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.425063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.425180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.425207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.429628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.429723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 
08:31:04.429745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.434304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.434424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.434444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.438877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.439287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.439308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.443472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.443623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.443645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.448192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.448269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.448289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.452742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.452874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.452913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.457139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.457246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.457265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.461480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.461570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:17.085 [2024-11-20 08:31:04.461589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.465679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.465759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-11-20 08:31:04.465779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.085 [2024-11-20 08:31:04.469779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.085 [2024-11-20 08:31:04.469908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.469928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.473949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.474104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.474124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.477929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.478127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.478147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.482292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.482379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.482398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.486745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.486858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.486892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.491414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.491505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.491526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.496170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.496261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.496282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.500627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.500695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.500721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.505208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.505330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.505350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.509665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.509942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.509965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.514379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.514598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.514624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.518576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.518658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.518679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.522902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.523004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.523027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.527150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.527250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.527270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.531254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.531334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.531355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.535483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.535584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.535643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.540385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.540491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.540511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.544946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.545111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.545132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.549441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.549692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.549714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.554409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.554613] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.554641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.559229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.559333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.559354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.563643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.563729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.563751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.568640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.086 [2024-11-20 08:31:04.568716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-11-20 08:31:04.568739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.086 [2024-11-20 08:31:04.573424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.573616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.573639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.578290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.578406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.578428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.583136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.583278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.583301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.587908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.587977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.588000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.592821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.592964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.592987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.597566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.597894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.597921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.602481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.602577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.602599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.607261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.607364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.607389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.611918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.612000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.612032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.616609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.616711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.616733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.621337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 
08:31:04.621428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.621449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.625928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.626009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.626032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.630126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.630211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.630231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.634720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.634814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.634851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.087 [2024-11-20 08:31:04.639161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.087 [2024-11-20 08:31:04.639256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-11-20 08:31:04.639276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.643565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.643660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.643682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.648112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.648214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.648236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.652662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with 
pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.652756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.652776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.657531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.657779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.657801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.662707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.662793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.662815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.667264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.667509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.667539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.671688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.671837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.671860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.676315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.676399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.676420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.681039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.681111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.681134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.685622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.685849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.685871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.690404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.690500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.690521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.695488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.695557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.695579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.700131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.700219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.700241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.704688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.704854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.704890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.709437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.709782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.709805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.714278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.714481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.714508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.718747] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.718865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.718902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.723300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.723379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.723401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.727889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.728012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.728033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.732393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.732477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.732498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.737023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.737106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.737126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.741575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.347 [2024-11-20 08:31:04.741815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.347 [2024-11-20 08:31:04.741838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.347 [2024-11-20 08:31:04.746135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.746320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.746341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.348 
[2024-11-20 08:31:04.750288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.750466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.750487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.754495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.754590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.754611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.759020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.759102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.759124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.763738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.763834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.763858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.768383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.768474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.768497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.773207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.773304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.773325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.777795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.778147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.778177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.782999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.783073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.783096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.787773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.788030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.788063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.792597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.792711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.792764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.797438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.797671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.797693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.802154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.802258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.802285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.806711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.806790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.806812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.811442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.811533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.811555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.816298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.816394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.816417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.820971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.821069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.821090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.825718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.825808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.825830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.830657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.830756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.830778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.835179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.835254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.835276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.839702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.839895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.839929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.844160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.844245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.844267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.848691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.848928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.848951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.853399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.853497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.853519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.857936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.858039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.858061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.862417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.862508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.348 [2024-11-20 08:31:04.862530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.348 [2024-11-20 08:31:04.867010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.348 [2024-11-20 08:31:04.867110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.349 [2024-11-20 08:31:04.867132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.349 [2024-11-20 08:31:04.871402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.349 [2024-11-20 08:31:04.871500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.349 [2024-11-20 08:31:04.871522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.349 [2024-11-20 08:31:04.875896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.349 [2024-11-20 08:31:04.876076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.349 [2024-11-20 08:31:04.876096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.349 [2024-11-20 08:31:04.880191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.349 [2024-11-20 08:31:04.880378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.349 [2024-11-20 08:31:04.880403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.349 [2024-11-20 08:31:04.884532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.349 [2024-11-20 08:31:04.884788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.349 [2024-11-20 08:31:04.884810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.349 [2024-11-20 08:31:04.889393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.349 [2024-11-20 08:31:04.889483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.349 [2024-11-20 08:31:04.889504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.349 [2024-11-20 08:31:04.893789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.349 [2024-11-20 08:31:04.893913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.349 [2024-11-20 08:31:04.893935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.349 [2024-11-20 08:31:04.898463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.349 [2024-11-20 08:31:04.898562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.349 [2024-11-20 08:31:04.898584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.349 [2024-11-20 08:31:04.902972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.349 [2024-11-20 08:31:04.903055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.349 [2024-11-20 08:31:04.903078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.907469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.907567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 
08:31:04.907589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.912162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.912259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.912280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.916745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.916991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.917013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.921471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.921623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.921646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.925974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.926139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.926161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.930471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.930545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.930568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.935112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.935203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.935225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.939724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.939819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:17.608 [2024-11-20 08:31:04.939841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.944396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.944638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.944660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.949157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.949237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.949260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.953788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.953946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.953968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.958329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.958439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.958460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.963015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.963139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.963175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.967483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.967581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.967631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.972223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.972378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.972399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.976676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.976940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.976962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.981067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.981244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.981265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.985187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.608 [2024-11-20 08:31:04.985267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.608 [2024-11-20 08:31:04.985287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.608 [2024-11-20 08:31:04.989427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:04.989525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:04.989562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:04.994241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:04.994348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:04.994368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:04.998871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:04.998982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:04.999005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.003356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.003430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.003452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.008033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.008118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.008140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.012661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.012885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.012921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.017489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.017610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.017632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.022033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.022218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.022240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.026746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.026844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.026866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.031264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.031336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.031358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.035769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.035885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.035908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.040297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.040543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.040564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.045079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.045158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.045180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.049587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.049676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.049699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.054203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.054295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.054316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.058601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.058718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.058738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.062999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.063092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.063112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.067271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.067354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.067374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.071655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.071746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.071767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.076486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.076585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.076606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.080907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.081039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.609 [2024-11-20 08:31:05.081059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.609 [2024-11-20 08:31:05.085255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.609 [2024-11-20 08:31:05.085350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.085369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.089779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.089935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.089968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.094402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.094517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.094537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.099209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 
08:31:05.099353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.099375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.103892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.104016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.104037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.108461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.108701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.108723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.113436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.113526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.113549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.118196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.118274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.118294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.122967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.123060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.123080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.127538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.127662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.127684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.131882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with 
pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.132001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.132021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.135996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.136094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.136114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.140025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.140242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.140273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.144051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.144301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.144322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.148436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.148531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.148566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.152695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.152785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.152806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.157010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.157090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.157111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.161102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.161197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.161232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.610 [2024-11-20 08:31:05.165227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.610 [2024-11-20 08:31:05.165319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.610 [2024-11-20 08:31:05.165338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.869 [2024-11-20 08:31:05.169470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.869 [2024-11-20 08:31:05.169570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.869 [2024-11-20 08:31:05.169590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.869 [2024-11-20 08:31:05.173621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.869 [2024-11-20 08:31:05.173816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.869 [2024-11-20 08:31:05.173852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.869 [2024-11-20 08:31:05.177945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.869 [2024-11-20 08:31:05.178106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.869 [2024-11-20 08:31:05.178127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.869 [2024-11-20 08:31:05.181927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.869 [2024-11-20 08:31:05.182015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.869 [2024-11-20 08:31:05.182035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.869 [2024-11-20 08:31:05.186037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.869 [2024-11-20 08:31:05.186117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.869 [2024-11-20 08:31:05.186136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.869 [2024-11-20 08:31:05.190142] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.869 [2024-11-20 08:31:05.190228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.869 [2024-11-20 08:31:05.190248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.869 [2024-11-20 08:31:05.194235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.869 [2024-11-20 08:31:05.194334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.869 [2024-11-20 08:31:05.194354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.869 [2024-11-20 08:31:05.198380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.869 [2024-11-20 08:31:05.198462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.869 [2024-11-20 08:31:05.198483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.869 [2024-11-20 08:31:05.202600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.869 [2024-11-20 08:31:05.202678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.869 [2024-11-20 08:31:05.202698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.869 [2024-11-20 08:31:05.206818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.869 [2024-11-20 08:31:05.206922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.869 [2024-11-20 08:31:05.206942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.869 [2024-11-20 08:31:05.210912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x202c8f0) with pdu=0x2000166ff3c8 00:18:17.870 [2024-11-20 08:31:05.211092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.870 [2024-11-20 08:31:05.211111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.870 6533.00 IOPS, 816.62 MiB/s 00:18:17.870 Latency(us) 00:18:17.870 [2024-11-20T08:31:05.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.870 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:17.870 nvme0n1 : 2.00 6530.20 816.28 0.00 0.00 2444.53 1630.95 5749.29 00:18:17.870 [2024-11-20T08:31:05.431Z] =================================================================================================================== 00:18:17.870 [2024-11-20T08:31:05.431Z] Total : 6530.20 
816.28 0.00 0.00 2444.53 1630.95 5749.29 00:18:17.870 { 00:18:17.870 "results": [ 00:18:17.870 { 00:18:17.870 "job": "nvme0n1", 00:18:17.870 "core_mask": "0x2", 00:18:17.870 "workload": "randwrite", 00:18:17.870 "status": "finished", 00:18:17.870 "queue_depth": 16, 00:18:17.870 "io_size": 131072, 00:18:17.870 "runtime": 2.003307, 00:18:17.870 "iops": 6530.202310479622, 00:18:17.870 "mibps": 816.2752888099527, 00:18:17.870 "io_failed": 0, 00:18:17.870 "io_timeout": 0, 00:18:17.870 "avg_latency_us": 2444.534539339273, 00:18:17.870 "min_latency_us": 1630.9527272727273, 00:18:17.870 "max_latency_us": 5749.294545454545 00:18:17.870 } 00:18:17.870 ], 00:18:17.870 "core_count": 1 00:18:17.870 } 00:18:17.870 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:17.870 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:17.870 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:17.870 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:17.870 | .driver_specific 00:18:17.870 | .nvme_error 00:18:17.870 | .status_code 00:18:17.870 | .command_transient_transport_error' 00:18:18.129 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 422 > 0 )) 00:18:18.129 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80555 00:18:18.129 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' -z 80555 ']' 00:18:18.129 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill -0 80555 00:18:18.129 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # uname 00:18:18.129 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:18:18.129 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80555 00:18:18.129 killing process with pid 80555 00:18:18.129 Received shutdown signal, test time was about 2.000000 seconds 00:18:18.129 00:18:18.129 Latency(us) 00:18:18.129 [2024-11-20T08:31:05.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.129 [2024-11-20T08:31:05.690Z] =================================================================================================================== 00:18:18.129 [2024-11-20T08:31:05.690Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:18.129 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:18:18.129 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:18:18.129 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80555' 00:18:18.129 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # kill 80555 00:18:18.129 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@981 -- # wait 80555 00:18:18.389 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80369 00:18:18.389 
08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' -z 80369 ']' 00:18:18.389 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill -0 80369 00:18:18.389 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # uname 00:18:18.389 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:18:18.389 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80369 00:18:18.389 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:18:18.389 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:18:18.389 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80369' 00:18:18.389 killing process with pid 80369 00:18:18.389 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # kill 80369 00:18:18.389 08:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@981 -- # wait 80369 00:18:18.763 00:18:18.763 real 0m16.816s 00:18:18.763 user 0m33.239s 00:18:18.763 sys 0m4.574s 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1133 -- # xtrace_disable 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:18.763 ************************************ 00:18:18.763 END TEST nvmf_digest_error 00:18:18.763 ************************************ 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:18.763 rmmod nvme_tcp 00:18:18.763 rmmod nvme_fabrics 00:18:18.763 rmmod nvme_keyring 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80369 ']' 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80369 00:18:18.763 Process with pid 80369 is not found 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@957 -- # '[' -z 80369 ']' 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@961 -- # kill -0 80369 00:18:18.763 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 961: kill: (80369) - No such process 00:18:18.763 08:31:06 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@984 -- # echo 'Process with pid 80369 is not found' 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:18.763 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:19.022 ************************************ 00:18:19.022 END TEST nvmf_digest 00:18:19.022 ************************************ 00:18:19.022 00:18:19.022 real 0m33.516s 00:18:19.022 user 1m3.836s 00:18:19.022 sys 0m9.538s 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1133 -- # xtrace_disable 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # 
[[ 0 -eq 1 ]] 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1114 -- # xtrace_disable 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.022 ************************************ 00:18:19.022 START TEST nvmf_host_multipath 00:18:19.022 ************************************ 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:19.022 * Looking for test storage... 00:18:19.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1638 -- # lcov --version 00:18:19.022 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:18:19.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.283 --rc genhtml_branch_coverage=1 00:18:19.283 --rc genhtml_function_coverage=1 00:18:19.283 --rc genhtml_legend=1 00:18:19.283 --rc geninfo_all_blocks=1 00:18:19.283 --rc geninfo_unexecuted_blocks=1 00:18:19.283 00:18:19.283 ' 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:18:19.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.283 --rc genhtml_branch_coverage=1 00:18:19.283 --rc genhtml_function_coverage=1 00:18:19.283 --rc genhtml_legend=1 00:18:19.283 --rc geninfo_all_blocks=1 00:18:19.283 --rc geninfo_unexecuted_blocks=1 00:18:19.283 00:18:19.283 ' 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:18:19.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.283 --rc genhtml_branch_coverage=1 00:18:19.283 --rc genhtml_function_coverage=1 00:18:19.283 --rc genhtml_legend=1 00:18:19.283 --rc geninfo_all_blocks=1 00:18:19.283 --rc geninfo_unexecuted_blocks=1 00:18:19.283 00:18:19.283 ' 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:18:19.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.283 --rc genhtml_branch_coverage=1 00:18:19.283 --rc genhtml_function_coverage=1 00:18:19.283 --rc genhtml_legend=1 00:18:19.283 --rc geninfo_all_blocks=1 00:18:19.283 --rc geninfo_unexecuted_blocks=1 00:18:19.283 00:18:19.283 ' 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.283 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.284 08:31:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:19.284 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.284 08:31:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:19.284 Cannot find device "nvmf_init_br" 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:19.284 Cannot find device "nvmf_init_br2" 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:19.284 Cannot find device "nvmf_tgt_br" 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:19.284 Cannot find device "nvmf_tgt_br2" 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:19.284 Cannot find device "nvmf_init_br" 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:19.284 Cannot find device "nvmf_init_br2" 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:19.284 Cannot find device "nvmf_tgt_br" 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:19.284 Cannot find device "nvmf_tgt_br2" 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 
00:18:19.284 Cannot find device "nvmf_br" 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:19.284 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:19.543 Cannot find device "nvmf_init_if" 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:19.543 Cannot find device "nvmf_init_if2" 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:19.543 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 
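The nvmf/common.sh commands above are nvmf_veth_init: the "Cannot find device" and "Cannot open network namespace" messages come from a best-effort teardown of leftover interfaces (each failure is swallowed with true), after which the virtual topology for the TCP tests is rebuilt. Condensed to its essentials, and keeping the interface names and addresses used in this run, the setup amounts to roughly the following sketch (the bridge step follows just below in the log; this is not the verbatim script):

    # Condensed sketch of nvmf_veth_init; names and addresses are taken from the run above.
    ip netns add nvmf_tgt_ns_spdk                                  # target gets its own network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins the *_br ends of both pairs
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br

The second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is wired up the same way, and the iptables ACCEPT rules and pings that follow simply confirm that port 4420 traffic is allowed and that both initiator/target paths are reachable before the target application starts.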
00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:19.544 08:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:19.544 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:19.544 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:18:19.544 00:18:19.544 --- 10.0.0.3 ping statistics --- 00:18:19.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.544 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:19.544 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:19.544 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:18:19.544 00:18:19.544 --- 10.0.0.4 ping statistics --- 00:18:19.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.544 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:19.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:19.544 00:18:19.544 --- 10.0.0.1 ping statistics --- 00:18:19.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.544 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:19.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:18:19.544 00:18:19.544 --- 10.0.0.2 ping statistics --- 00:18:19.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.544 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:19.544 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:19.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80883 00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80883 00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # '[' -z 80883 ']' 00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@843 -- # local max_retries=100 00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
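With the namespace in place and nvme-tcp loaded, nvmfappstart launches the target inside that namespace and waits for its RPC socket; reduced to the pieces visible above, the step looks approximately like this (a sketch of what common.sh does, not the exact code):

    # Launch the SPDK NVMe-oF target inside the test namespace (command taken from the log above).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # waitforlisten then blocks until the application is reachable on /var/tmp/spdk.sock,
    # which is what the "Waiting for process to start up and listen on UNIX domain
    # socket /var/tmp/spdk.sock..." message refers to.

The core mask 0x3 keeps the target on two cores; the bdevperf host process started further down runs with -m 0x4, so target and initiator do not share a core.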
00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@847 -- # xtrace_disable 00:18:19.803 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:19.803 [2024-11-20 08:31:07.190736] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:18:19.803 [2024-11-20 08:31:07.191027] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.804 [2024-11-20 08:31:07.343118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:20.062 [2024-11-20 08:31:07.401896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.062 [2024-11-20 08:31:07.402169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.062 [2024-11-20 08:31:07.402343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.062 [2024-11-20 08:31:07.402491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.062 [2024-11-20 08:31:07.402533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.062 [2024-11-20 08:31:07.406848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.062 [2024-11-20 08:31:07.406890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.062 [2024-11-20 08:31:07.464355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:20.062 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:18:20.062 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@871 -- # return 0 00:18:20.062 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:20.062 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@735 -- # xtrace_disable 00:18:20.062 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:20.062 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.062 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80883 00:18:20.062 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:20.321 [2024-11-20 08:31:07.868299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.579 08:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:20.839 Malloc0 00:18:20.839 08:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:21.098 08:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:21.357 08:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:21.615 [2024-11-20 08:31:08.999619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:21.615 08:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:21.873 [2024-11-20 08:31:09.264006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:21.873 08:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80927 00:18:21.873 08:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:21.873 08:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:21.873 08:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80927 /var/tmp/bdevperf.sock 00:18:21.873 08:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # '[' -z 80927 ']' 00:18:21.873 08:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:21.873 08:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@843 -- # local max_retries=100 00:18:21.873 08:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:21.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
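At this point multipath.sh has configured the target over the default RPC socket and is waiting for bdevperf to come up. In order, the target-side calls shown above are roughly:

    # Target-side configuration replayed from the log above (a sketch; rpc.py talks to
    # /var/tmp/spdk.sock by default, which is where the target is listening).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

The -r flag enables ANA reporting on the subsystem, which is what the multipath test exercises. Below, the bdevperf instance that was just started (-m 0x4, RPC socket /var/tmp/bdevperf.sock) is given bdev_nvme_set_options -r -1 and two bdev_nvme_attach_controller ... -x multipath calls, one per listener port, so the single Nvme0n1 bdev ends up with a path through 4420 and a path through 4421. Each confirm_io_on_port round then flips listener ANA states with nvmf_subsystem_listener_set_ana_state, asks the target which port currently reports the requested state (nvmf_subsystem_get_listeners piped through jq), and checks against the bpftrace nvmf_path.bt counters, the @path[10.0.0.3, <port>]: <count> lines, that I/O really flowed through that port.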
00:18:21.873 08:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@847 -- # xtrace_disable 00:18:21.873 08:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:22.805 08:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:18:22.806 08:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@871 -- # return 0 00:18:22.806 08:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:23.064 08:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:23.632 Nvme0n1 00:18:23.632 08:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:23.890 Nvme0n1 00:18:23.890 08:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:23.890 08:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:24.827 08:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:24.827 08:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:25.086 08:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:25.345 08:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:25.345 08:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:25.345 08:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80978 00:18:25.345 08:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:31.909 08:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:31.909 08:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:31.909 Attaching 4 probes... 
00:18:31.909 @path[10.0.0.3, 4421]: 15689 00:18:31.909 @path[10.0.0.3, 4421]: 17000 00:18:31.909 @path[10.0.0.3, 4421]: 18960 00:18:31.909 @path[10.0.0.3, 4421]: 18265 00:18:31.909 @path[10.0.0.3, 4421]: 17552 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80978 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:31.909 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:32.168 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:32.168 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81091 00:18:32.168 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:32.168 08:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:38.733 Attaching 4 probes... 
00:18:38.733 @path[10.0.0.3, 4420]: 16692 00:18:38.733 @path[10.0.0.3, 4420]: 17235 00:18:38.733 @path[10.0.0.3, 4420]: 16714 00:18:38.733 @path[10.0.0.3, 4420]: 15709 00:18:38.733 @path[10.0.0.3, 4420]: 16051 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81091 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:38.733 08:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:38.733 08:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:39.300 08:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:39.300 08:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81209 00:18:39.300 08:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:39.300 08:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.907 Attaching 4 probes... 
00:18:45.907 @path[10.0.0.3, 4421]: 12884 00:18:45.907 @path[10.0.0.3, 4421]: 16153 00:18:45.907 @path[10.0.0.3, 4421]: 15741 00:18:45.907 @path[10.0.0.3, 4421]: 14463 00:18:45.907 @path[10.0.0.3, 4421]: 14362 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81209 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:45.907 08:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:45.907 08:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:46.165 08:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:46.165 08:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:46.165 08:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81326 00:18:46.165 08:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:52.731 Attaching 4 probes... 
00:18:52.731 00:18:52.731 00:18:52.731 00:18:52.731 00:18:52.731 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81326 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:52.731 08:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:52.731 08:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:52.990 08:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:52.990 08:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81440 00:18:52.990 08:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:52.990 08:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:59.556 Attaching 4 probes... 
00:18:59.556 @path[10.0.0.3, 4421]: 16315 00:18:59.556 @path[10.0.0.3, 4421]: 16208 00:18:59.556 @path[10.0.0.3, 4421]: 16532 00:18:59.556 @path[10.0.0.3, 4421]: 16560 00:18:59.556 @path[10.0.0.3, 4421]: 16220 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81440 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:59.556 08:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:59.814 08:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:00.750 08:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:00.750 08:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81562 00:19:00.750 08:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:00.750 08:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:07.334 Attaching 4 probes... 
00:19:07.334 @path[10.0.0.3, 4420]: 16097 00:19:07.334 @path[10.0.0.3, 4420]: 16450 00:19:07.334 @path[10.0.0.3, 4420]: 16186 00:19:07.334 @path[10.0.0.3, 4420]: 15804 00:19:07.334 @path[10.0.0.3, 4420]: 15974 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81562 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:07.334 [2024-11-20 08:31:54.810212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:07.334 08:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:07.902 08:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:14.467 08:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:14.467 08:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81738 00:19:14.467 08:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80883 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:14.467 08:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:19.805 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:19.805 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:20.063 Attaching 4 probes... 
00:19:20.063 @path[10.0.0.3, 4421]: 15939 00:19:20.063 @path[10.0.0.3, 4421]: 16328 00:19:20.063 @path[10.0.0.3, 4421]: 16077 00:19:20.063 @path[10.0.0.3, 4421]: 16232 00:19:20.063 @path[10.0.0.3, 4421]: 16128 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81738 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80927 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' -z 80927 ']' 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@961 -- # kill -0 80927 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # uname 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80927 00:19:20.063 killing process with pid 80927 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80927' 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # kill 80927 00:19:20.063 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@981 -- # wait 80927 00:19:20.063 { 00:19:20.063 "results": [ 00:19:20.063 { 00:19:20.063 "job": "Nvme0n1", 00:19:20.063 "core_mask": "0x4", 00:19:20.063 "workload": "verify", 00:19:20.063 "status": "terminated", 00:19:20.063 "verify_range": { 00:19:20.063 "start": 0, 00:19:20.063 "length": 16384 00:19:20.063 }, 00:19:20.063 "queue_depth": 128, 00:19:20.063 "io_size": 4096, 00:19:20.063 "runtime": 56.173266, 00:19:20.063 "iops": 6944.922874877882, 00:19:20.063 "mibps": 27.128604979991728, 00:19:20.063 "io_failed": 0, 00:19:20.063 "io_timeout": 0, 00:19:20.063 "avg_latency_us": 18400.355624374755, 00:19:20.063 "min_latency_us": 1124.5381818181818, 00:19:20.063 "max_latency_us": 7046430.72 00:19:20.063 } 00:19:20.063 ], 00:19:20.064 "core_count": 1 00:19:20.064 } 00:19:20.342 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80927 00:19:20.342 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:20.342 [2024-11-20 08:31:09.348921] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 
24.03.0 initialization... 00:19:20.342 [2024-11-20 08:31:09.349044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80927 ] 00:19:20.342 [2024-11-20 08:31:09.501375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.342 [2024-11-20 08:31:09.569084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.342 [2024-11-20 08:31:09.626256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:20.342 Running I/O for 90 seconds... 00:19:20.342 7950.00 IOPS, 31.05 MiB/s [2024-11-20T08:32:07.903Z] 8029.00 IOPS, 31.36 MiB/s [2024-11-20T08:32:07.903Z] 8062.00 IOPS, 31.49 MiB/s [2024-11-20T08:32:07.903Z] 8150.50 IOPS, 31.84 MiB/s [2024-11-20T08:32:07.903Z] 8408.40 IOPS, 32.85 MiB/s [2024-11-20T08:32:07.903Z] 8528.00 IOPS, 33.31 MiB/s [2024-11-20T08:32:07.903Z] 8570.29 IOPS, 33.48 MiB/s [2024-11-20T08:32:07.903Z] 8562.00 IOPS, 33.45 MiB/s [2024-11-20T08:32:07.903Z] [2024-11-20 08:31:19.647198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.342 [2024-11-20 08:31:19.647310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:20.342 [2024-11-20 08:31:19.647372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.342 [2024-11-20 08:31:19.647394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:20.342 [2024-11-20 08:31:19.647417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.342 [2024-11-20 08:31:19.647433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:20.342 [2024-11-20 08:31:19.647455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.342 [2024-11-20 08:31:19.647471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:20.342 [2024-11-20 08:31:19.647492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.342 [2024-11-20 08:31:19.647508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:20.342 [2024-11-20 08:31:19.647529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.342 [2024-11-20 08:31:19.647545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:20.342 [2024-11-20 08:31:19.647566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.342 [2024-11-20 08:31:19.647582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:20.342 [2024-11-20 08:31:19.647646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.342 [2024-11-20 08:31:19.647663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:20.342 [2024-11-20 08:31:19.647685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.342 [2024-11-20 08:31:19.647702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:20.342 [2024-11-20 08:31:19.647723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.342 [2024-11-20 08:31:19.647777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:20.342 [2024-11-20 08:31:19.647801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.342 [2024-11-20 08:31:19.647843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:20.342 [2024-11-20 08:31:19.647869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.342 [2024-11-20 08:31:19.647887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.647909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.647925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.647977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.647992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.648028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.648080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:20.343 [2024-11-20 08:31:19.648114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.648438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.648475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.648510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.648546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.648581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.648617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.648652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.648688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.648968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.648984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.649004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.649019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.649040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.649055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.649076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.649091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.649112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.649127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.649148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.649163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.649184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.649199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.649236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.649251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 
m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.649280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.649297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.649319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-20 08:31:19.649335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.649362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.649378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.649400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-20 08:31:19.649416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:20.343 [2024-11-20 08:31:19.649437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.649454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.649492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.649529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.649566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.649602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.649653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.649690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.649726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.649768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.649807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.649843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.649903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.649939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.649960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.649975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.650320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.650358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.650394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:20.344 [2024-11-20 08:31:19.650431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.650468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.650504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.650541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.344 [2024-11-20 08:31:19.650578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 
nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:20.344 [2024-11-20 08:31:19.650890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.344 [2024-11-20 08:31:19.650905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.650926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.650941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.650962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.650977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.650999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.651542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.651579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:19:20.345 [2024-11-20 08:31:19.651624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.651659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.651697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.651742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.651783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.651821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.651885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.651923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.651977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.651998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.652020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.652042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.652064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.652100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.652116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.652142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.652157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.652193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.652208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.652228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.652243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.653790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.345 [2024-11-20 08:31:19.653832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.653873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.653892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.653914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.653930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.653951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.653966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.653986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.345 [2024-11-20 08:31:19.654000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:20.345 [2024-11-20 08:31:19.654021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:19.654036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:19.654057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:19.654072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:19.654093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:19.654108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:19.654144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:19.654163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:20.346 8529.89 IOPS, 33.32 MiB/s [2024-11-20T08:32:07.907Z] 8526.50 IOPS, 33.31 MiB/s [2024-11-20T08:32:07.907Z] 8538.27 IOPS, 33.35 MiB/s [2024-11-20T08:32:07.907Z] 8512.75 IOPS, 33.25 MiB/s [2024-11-20T08:32:07.907Z] 8462.23 IOPS, 33.06 MiB/s [2024-11-20T08:32:07.907Z] 8433.79 IOPS, 32.94 MiB/s [2024-11-20T08:32:07.907Z] [2024-11-20 08:31:26.244477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.244566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.244601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.244633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.244654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.244668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.244720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.244736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.244755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.244770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.244788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.244802] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.244836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.244855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.244875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.244890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.244924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.244943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.244963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.244977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.244996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.346 [2024-11-20 08:31:26.245217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.346 [2024-11-20 08:31:26.245255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.346 [2024-11-20 08:31:26.245289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.346 [2024-11-20 08:31:26.245323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.346 [2024-11-20 08:31:26.245358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.346 [2024-11-20 08:31:26.245391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.346 [2024-11-20 08:31:26.245424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.346 [2024-11-20 08:31:26.245458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:95 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.346 [2024-11-20 08:31:26.245750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.346 [2024-11-20 08:31:26.245783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.346 [2024-11-20 08:31:26.245834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:20.346 [2024-11-20 08:31:26.245854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.245868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.245887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.245901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.245921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.245935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.245954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.245968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.245988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.347 [2024-11-20 08:31:26.246105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.347 [2024-11-20 08:31:26.246139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.347 [2024-11-20 08:31:26.246174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.347 [2024-11-20 08:31:26.246209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.347 [2024-11-20 08:31:26.246243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:20.347 [2024-11-20 08:31:26.246279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.347 [2024-11-20 08:31:26.246294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.347 [2024-11-20 08:31:26.246329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.347 [2024-11-20 08:31:26.246364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.246975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.246995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.247016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.247038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.247053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.247073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.347 [2024-11-20 08:31:26.247088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:20.347 [2024-11-20 08:31:26.247108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.247123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.247157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.247191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.247242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.247278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:20.348 [2024-11-20 08:31:26.247389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.247978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.247993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.248040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.248083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.248132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.248166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.248199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.248260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.248296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.348 [2024-11-20 08:31:26.248332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.248368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.248411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.248447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.248484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.248520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.248564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.248644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.248678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:19:20.348 [2024-11-20 08:31:26.248701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.248716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.348 [2024-11-20 08:31:26.248749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:20.348 [2024-11-20 08:31:26.248769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.248783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.248802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.248817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.248836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.248850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.248869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.248883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.248902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.248945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.248967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.248983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.249004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.349 [2024-11-20 08:31:26.249018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.249039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.349 [2024-11-20 08:31:26.249061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.249082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.349 [2024-11-20 08:31:26.249097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.249117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.349 [2024-11-20 08:31:26.249131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.249151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.349 [2024-11-20 08:31:26.249165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.249185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.349 [2024-11-20 08:31:26.249199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.249236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.349 [2024-11-20 08:31:26.249250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.250541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.349 [2024-11-20 08:31:26.250586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.250628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.250660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.250680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.250694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.250714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.250728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.250753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.250767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.250786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.250800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.250819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.250844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.250866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.250880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.250927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.250947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.250968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.250983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:20.349 [2024-11-20 08:31:26.251152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.251958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.349 [2024-11-20 08:31:26.251984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:20.349 [2024-11-20 08:31:26.252019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.252374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.252411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.252447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.252483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.252519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.252556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.252622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.252674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
00:19:20.350 [2024-11-20 08:31:26.252810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.252976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.252995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.253009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.253042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.253076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.253109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.253142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.253176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.253232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.350 [2024-11-20 08:31:26.253673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.253720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.253759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.253792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.253825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.253858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.253907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.350 [2024-11-20 08:31:26.253942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:20.350 [2024-11-20 08:31:26.253961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.253975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.253994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.254027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.254046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.254061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.264506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.264555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.264618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.264654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.264684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.264705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.264734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.264755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.264784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.264826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.264861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.264883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.264913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:20.351 [2024-11-20 08:31:26.264934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.264964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.264985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.265034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.265084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.265133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.265182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.265243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.265306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.265356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.265406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.265456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.265504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.265554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.265615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.265664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.265724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.265774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.265845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.265896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.265956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.265986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.266007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.266037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.266057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.266086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.266106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.266135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.351 [2024-11-20 08:31:26.266155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.266183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.266204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:20.351 [2024-11-20 08:31:26.266246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.351 [2024-11-20 08:31:26.266266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.266315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.266364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.266413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.266462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
00:19:20.352 [2024-11-20 08:31:26.266491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.266511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.266583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.266634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.266684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.266733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.266783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.266849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.266899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.266948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.266976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.266997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.267046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.267095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.267144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.267203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.267254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.267304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.267353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.267403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.267452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.267502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.267552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.267629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.267680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.267729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.267778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.352 [2024-11-20 08:31:26.267842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.267903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.267954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.267983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.268003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.268032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:20.352 [2024-11-20 08:31:26.268053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.268092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.268112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.268141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.268161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.268190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.268210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.268252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.268272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:20.352 [2024-11-20 08:31:26.268303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.352 [2024-11-20 08:31:26.268324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.268978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.268999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.269048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.269097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.269155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.269214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.269263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.269312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.269361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.269411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.269460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.269509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.269559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
00:19:20.353 [2024-11-20 08:31:26.269587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.269607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.269656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.269705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.269761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.269827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.269878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.353 [2024-11-20 08:31:26.269928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.269957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.269978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.270007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.270028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.270057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.270078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.270107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.270127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.270155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.270176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.270205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.270238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.270266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.353 [2024-11-20 08:31:26.270287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:20.353 [2024-11-20 08:31:26.270316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.270336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.270364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.270385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.270423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.270444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.270473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.270493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.270521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.270542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.270583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.270604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.270633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.270653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.273098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.273160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:20.354 [2024-11-20 08:31:26.273536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.273952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.273981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.274003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 
nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.274052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.274111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.274163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.274212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.274262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.274310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.274361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.354 [2024-11-20 08:31:26.274410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.274459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.274508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.274557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.274606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.274656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.354 [2024-11-20 08:31:26.274720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.354 [2024-11-20 08:31:26.274758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.274779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.274823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.274846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.274875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.274897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.274925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.274946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.274975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.274996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.275045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
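The "(03/02)" status that spdk_nvme_print_completion keeps printing in the records above is NVMe Path Related Status (SCT 0x3) with status code 0x02, Asymmetric Access Inaccessible, i.e. the ANA-inaccessible completions this nvmf failover test is expected to provoke. Below is a minimal, illustrative sketch of how a completion callback could recognize that status; it assumes the struct spdk_nvme_cpl layout from include/spdk/nvme_spec.h (status.sct / status.sc bitfields) and the helper name is made up for illustration, not taken from the test code.

/*
 * Illustrative sketch only -- not part of this test run.
 * SCT 0x3 = Path Related Status, SC 0x02 = Asymmetric Access Inaccessible,
 * which is the "(03/02)" shown by spdk_nvme_print_completion above.
 */
#include <stdbool.h>
#include "spdk/nvme.h"

static bool
completion_is_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
{
	/* Path Related Status + Asymmetric Access Inaccessible */
	return cpl->status.sct == 0x3 && cpl->status.sc == 0x02;
}

static void
io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl) && completion_is_ana_inaccessible(cpl)) {
		/* The namespace is unreachable on this path (ANA inaccessible);
		 * the multipath layer is expected to resubmit the I/O elsewhere. */
	}
}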
00:19:20.355 [2024-11-20 08:31:26.275074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.275095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.275144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.275193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.275242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.355 [2024-11-20 08:31:26.275291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.355 [2024-11-20 08:31:26.275340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.355 [2024-11-20 08:31:26.275398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.355 [2024-11-20 08:31:26.275447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.355 [2024-11-20 08:31:26.275496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.355 [2024-11-20 08:31:26.275545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.355 [2024-11-20 08:31:26.275622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.355 [2024-11-20 08:31:26.275676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.275735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.275771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.275806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.275857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.275892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.275927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.275971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.275991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.276006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.276042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.276076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.276126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.276160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.276195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.276277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.276314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.355 [2024-11-20 08:31:26.276350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.355 [2024-11-20 08:31:26.276385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:20.355 [2024-11-20 08:31:26.276420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.355 [2024-11-20 08:31:26.276465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.355 [2024-11-20 08:31:26.276502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:20.355 [2024-11-20 08:31:26.276523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.356 [2024-11-20 08:31:26.276538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.276559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.356 [2024-11-20 08:31:26.276574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.276594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.356 [2024-11-20 08:31:26.276624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.276644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.356 [2024-11-20 08:31:26.276658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.276678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.276693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.276712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.276727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.276747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.276761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.276781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 
nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.276796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.276816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.276830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.276864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.276883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.276902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.276924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.276946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.276961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.276981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.276995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
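Every completion in this burst also carries dnr:0, i.e. the Do Not Retry bit is clear, so the controller is signalling that these failed READ/WRITE commands may be retried once an accessible path is available rather than failed permanently. A small sketch of that retry decision follows, under the same assumptions as the snippet above (function name is hypothetical, field names assume struct spdk_nvme_cpl from include/spdk/nvme_spec.h).

/*
 * Illustrative sketch only. With status.dnr == 0, as in the log above,
 * the host may requeue the command (for example on another path); with
 * status.dnr == 1 the error must be reported up to the caller instead.
 */
static bool
io_should_retry(const struct spdk_nvme_cpl *cpl)
{
	return spdk_nvme_cpl_is_error(cpl) && cpl->status.dnr == 0;
}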
00:19:20.356 [2024-11-20 08:31:26.277510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.356 [2024-11-20 08:31:26.277526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.356 [2024-11-20 08:31:26.277561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.356 [2024-11-20 08:31:26.277596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.356 [2024-11-20 08:31:26.277630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.356 [2024-11-20 08:31:26.277664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.356 [2024-11-20 08:31:26.277699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.356 [2024-11-20 08:31:26.277733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.356 [2024-11-20 08:31:26.277768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:20.356 [2024-11-20 08:31:26.277796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.356 [2024-11-20 08:31:26.277834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.277855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.277870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.277890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.277904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.277924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.277939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.277958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.277973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.277992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.278007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.278045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.278079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.278121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.278154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.278196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.278230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.278271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.278306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.278340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.278374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.278408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.278441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.278476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.278511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.278545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.278565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:20.357 [2024-11-20 08:31:26.278580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.280432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.280475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.357 [2024-11-20 08:31:26.280524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.280561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.280598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.280647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.280681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.280715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.280749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.280784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.280818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.280868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.280906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.280940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.280959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.280974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.281002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.281017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.281037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.281052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:20.357 [2024-11-20 08:31:26.281072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.357 [2024-11-20 08:31:26.281086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.281121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.281155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.281189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.281223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.281257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.281291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.281325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.281359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.281394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 
dnr:0 00:19:20.358 [2024-11-20 08:31:26.281490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.281983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.281997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.282031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.282065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.282099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.282132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.282166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.282200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.282235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.358 [2024-11-20 08:31:26.282268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.282309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.282344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.282378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.282413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.282462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:20.358 [2024-11-20 08:31:26.282483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.358 [2024-11-20 08:31:26.282498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.282532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:20.359 [2024-11-20 08:31:26.282566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.282600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.282642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.282684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.282718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.282759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.282795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.282850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.282884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.359 [2024-11-20 08:31:26.282918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.359 [2024-11-20 08:31:26.282958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.282978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.359 [2024-11-20 08:31:26.282993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.359 [2024-11-20 08:31:26.283027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.359 [2024-11-20 08:31:26.283060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.359 [2024-11-20 08:31:26.283094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.359 [2024-11-20 08:31:26.283128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.359 [2024-11-20 08:31:26.283161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:19:20.359 [2024-11-20 08:31:26.283695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.359 [2024-11-20 08:31:26.283937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:20.359 [2024-11-20 08:31:26.283957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.283972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.283992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.284007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.284042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.284077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.284126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.284173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.284210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.284261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.284296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.284331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.284366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.284401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.284435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.284470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.284505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.284540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.284594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.284652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.284689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.284723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.284743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.292626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.292683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.292720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.292745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:20.360 [2024-11-20 08:31:26.292761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.292788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.292803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.292840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.292859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.292881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.292896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.292917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.292948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.292969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.360 [2024-11-20 08:31:26.292984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.293005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.293036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.293057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.293073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.293109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.293126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.293148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.293163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:20.360 [2024-11-20 08:31:26.293696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 
nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:26.293724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.360 8363.80 IOPS, 32.67 MiB/s [2024-11-20T08:32:07.921Z] 7873.19 IOPS, 30.75 MiB/s [2024-11-20T08:32:07.921Z] 7872.18 IOPS, 30.75 MiB/s [2024-11-20T08:32:07.921Z] 7880.17 IOPS, 30.78 MiB/s [2024-11-20T08:32:07.921Z] 7872.16 IOPS, 30.75 MiB/s [2024-11-20T08:32:07.921Z] 7838.35 IOPS, 30.62 MiB/s [2024-11-20T08:32:07.921Z] 7810.43 IOPS, 30.51 MiB/s [2024-11-20T08:32:07.921Z] 7781.59 IOPS, 30.40 MiB/s [2024-11-20T08:32:07.921Z] [2024-11-20 08:31:33.485836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.360 [2024-11-20 08:31:33.485930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.485995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.486015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.486063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.486113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.486148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.486194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.486229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.486264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
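The per-interval throughput figures in the summary above (e.g. 8363.80 IOPS, 32.67 MiB/s) are consistent with 4 KiB requests: every command printed in this trace is len:8 blocks, and with a 512-byte logical block that is 4096 bytes per I/O. The block size itself is not printed in this log, so the sketch below is a minimal illustration under that assumption; the names are illustrative, not part of the test.

# Sketch: reproduce the logged MiB/s figures from the IOPS values,
# assuming 4 KiB per I/O (len:8 blocks x 512-byte logical blocks -- an
# inference from the command prints, not stated in the log itself).
BLOCK_SIZE = 512                          # bytes per logical block (assumed)
BLOCKS_PER_IO = 8                         # "len:8" in the nvme_qpair command prints
IO_SIZE = BLOCK_SIZE * BLOCKS_PER_IO      # 4096 bytes per request

def iops_to_mibps(iops: float) -> float:
    """Throughput in MiB/s for a given IOPS rate at IO_SIZE bytes per I/O."""
    return iops * IO_SIZE / (1024 * 1024)

# Values taken from the interval summary above reproduce to two decimals:
for iops in (8363.80, 7873.19, 7781.59):
    print(f"{iops:8.2f} IOPS -> {iops_to_mibps(iops):.2f} MiB/s")
# 8363.80 IOPS -> 32.67 MiB/s
# 7873.19 IOPS -> 30.75 MiB/s
# 7781.59 IOPS -> 30.40 MiB/s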
00:19:20.361 [2024-11-20 08:31:33.486284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.486857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.361 [2024-11-20 08:31:33.486872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:20.361 [2024-11-20 08:31:33.487852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:20.361 [2024-11-20 08:31:33.487916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.361 [2024-11-20 08:31:33.487931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.487968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.487982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.488018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.488055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.488105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.488140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.488191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.488265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66896 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.488301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.488931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.488976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.488998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.489013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.489034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.489049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 
dnr:0 00:19:20.362 [2024-11-20 08:31:33.489081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.489095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.489115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.489130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.489151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.489173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.489194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.489209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.489229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.362 [2024-11-20 08:31:33.489260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.489282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.489296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.489317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.489332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.489353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.489368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.489389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.489404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.489425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.489439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.489461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.362 [2024-11-20 08:31:33.489477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:20.362 [2024-11-20 08:31:33.489498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.489513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.489535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.489550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.489572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.489587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.489624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.489644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.489666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.489681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.489701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.489715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.489736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.489750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.489772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.489801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.489823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.489837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.489870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.489886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.489911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.363 [2024-11-20 08:31:33.489927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.489948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.363 [2024-11-20 08:31:33.489963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.489985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.363 [2024-11-20 08:31:33.490000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.363 [2024-11-20 08:31:33.490035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.363 [2024-11-20 08:31:33.490072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.363 [2024-11-20 08:31:33.490123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.363 [2024-11-20 08:31:33.490169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.363 [2024-11-20 08:31:33.490204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:20.363 [2024-11-20 08:31:33.490240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 
nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.363 [2024-11-20 08:31:33.490946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:20.363 [2024-11-20 08:31:33.490967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.364 [2024-11-20 08:31:33.490981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:33.491002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.364 [2024-11-20 08:31:33.491016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:33.491037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.364 [2024-11-20 08:31:33.491064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:33.491086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.364 [2024-11-20 08:31:33.491100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:33.491140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:33.491159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:33.491181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:33.491196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:33.491217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:33.491231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:33.491251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:33.491266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:33.491287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:33.491301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:33.491321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:33.491336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
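Every completion above is reported with the path-related NVMe status that SPDK prints as ASYMMETRIC ACCESS INACCESSIBLE (03/02): status code type 0x3, status code 0x02, meaning the ANA group serving the namespace became inaccessible while these reads and writes were in flight. What follows is a minimal, hypothetical sketch of how a host completion callback could decode those fields; struct spdk_nvme_cpl and spdk_nvme_cpl_is_error() are public SPDK API, while io_complete(), struct io_ctx and the retry flag are illustrative assumptions and not part of this test.

/* Hypothetical completion callback: decodes the "(03/02)" status seen above.
 * struct io_ctx and the retry flag are illustrative placeholders. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

struct io_ctx {
	uint64_t lba;   /* starting LBA of the I/O, as printed in the log */
	bool     retry; /* set when the I/O should be resubmitted on another path */
};

static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *io = arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* success */
	}

	/* sct=0x3 is the path-related status code type; sc=0x02 is the code the
	 * log prints as "ASYMMETRIC ACCESS INACCESSIBLE (03/02)". */
	if (cpl->status.sct == 0x3 && cpl->status.sc == 0x02) {
		io->retry = true; /* candidate for retry once another path is usable */
	}

	printf("lba %" PRIu64 " failed: sct=%#x sc=%#x dnr=%u\n",
	       io->lba, (unsigned)cpl->status.sct, (unsigned)cpl->status.sc,
	       (unsigned)cpl->status.dnr);
}
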
00:19:20.364 [2024-11-20 08:31:33.491356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:33.491370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:33.491391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:33.491405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:20.364 7482.57 IOPS, 29.23 MiB/s [2024-11-20T08:32:07.925Z] 7170.79 IOPS, 28.01 MiB/s [2024-11-20T08:32:07.925Z] 6883.96 IOPS, 26.89 MiB/s [2024-11-20T08:32:07.925Z] 6619.19 IOPS, 25.86 MiB/s [2024-11-20T08:32:07.925Z] 6374.04 IOPS, 24.90 MiB/s [2024-11-20T08:32:07.925Z] 6146.39 IOPS, 24.01 MiB/s [2024-11-20T08:32:07.925Z] 5934.45 IOPS, 23.18 MiB/s [2024-11-20T08:32:07.925Z] 5977.47 IOPS, 23.35 MiB/s [2024-11-20T08:32:07.925Z] 6049.16 IOPS, 23.63 MiB/s [2024-11-20T08:32:07.925Z] 6113.62 IOPS, 23.88 MiB/s [2024-11-20T08:32:07.925Z] 6180.48 IOPS, 24.14 MiB/s [2024-11-20T08:32:07.925Z] 6241.06 IOPS, 24.38 MiB/s [2024-11-20T08:32:07.925Z] 6296.00 IOPS, 24.59 MiB/s [2024-11-20T08:32:07.925Z] [2024-11-20 08:31:47.146783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.146900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.146978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 
sqhd:004d p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.364 [2024-11-20 08:31:47.147569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.364 [2024-11-20 08:31:47.147631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.364 [2024-11-20 08:31:47.147689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.364 [2024-11-20 08:31:47.147726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.364 [2024-11-20 08:31:47.147762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:20.364 [2024-11-20 08:31:47.147783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.364 [2024-11-20 08:31:47.147798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.147820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.365 [2024-11-20 08:31:47.147847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.147870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.365 [2024-11-20 08:31:47.147916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.147937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.365 [2024-11-20 08:31:47.147968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 
08:31:47.148095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.148733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.365 [2024-11-20 08:31:47.148761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.365 [2024-11-20 08:31:47.148790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.365 [2024-11-20 08:31:47.148832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.365 [2024-11-20 08:31:47.148875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.365 [2024-11-20 08:31:47.148917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.365 [2024-11-20 08:31:47.148961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.148975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.365 [2024-11-20 08:31:47.148988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.149022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.365 [2024-11-20 08:31:47.149038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.149053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.149067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.149081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.149094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.149108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.149122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.149136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.149149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.149163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.149176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.149205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.365 [2024-11-20 08:31:47.149218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.365 [2024-11-20 08:31:47.149231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.149243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.149269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.149312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.149339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.149366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.149399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
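From roughly this point onward the completion status changes to ABORTED - SQ DELETION (00/08): generic status type 0x0, code 0x08, which the driver reports for commands still queued or in flight when the submission queue is torn down; further down the log the controller is reset and the NVMe/TCP reconnect fails with errno 111 (connection refused). Below is a minimal, hypothetical sketch of how a host poll loop might react to such a failed qpair; spdk_nvme_qpair_process_completions() and spdk_nvme_ctrlr_reset() are public SPDK API and -ENXIO is the documented return for a transport-failed qpair, while poll_io() and its retry policy are illustrative assumptions rather than the test's own code.

/* Hypothetical host-side poll loop reacting to a dropped NVMe/TCP connection.
 * Once the transport fails, spdk_nvme_qpair_process_completions() returns
 * -ENXIO and the remaining I/O is completed as "ABORTED - SQ DELETION",
 * as in the log; names and retry policy here are illustrative only. */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include "spdk/nvme.h"

static int
poll_io(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc >= 0) {
		return 0; /* rc completions were reaped */
	}

	if (rc == -ENXIO) {
		/* Transport-level failure (e.g. the "Bad file descriptor" /
		 * "connect() failed, errno = 111" seen later in the log):
		 * reset the controller; outstanding I/O has already been aborted. */
		fprintf(stderr, "qpair failed, resetting controller\n");
		if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
			return -1; /* reset failed; caller decides whether to retry */
		}
		/* I/O qpairs must be reconnected (or re-allocated) before reuse. */
		return 0;
	}

	return rc;
}
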
00:19:20.366 [2024-11-20 08:31:47.149476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 
08:31:47.149760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.149897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.149932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.149962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.149977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.149990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.366 [2024-11-20 08:31:47.150394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:98 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.150427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.366 [2024-11-20 08:31:47.150449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.366 [2024-11-20 08:31:47.150461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.367 [2024-11-20 08:31:47.150846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c10290 is same with the state(6) to be set 00:19:20.367 [2024-11-20 08:31:47.150875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.367 [2024-11-20 08:31:47.150900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.367 [2024-11-20 08:31:47.150912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110872 len:8 PRP1 0x0 PRP2 0x0 00:19:20.367 [2024-11-20 08:31:47.150931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.367 [2024-11-20 08:31:47.150954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.367 [2024-11-20 08:31:47.150970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111392 len:8 PRP1 0x0 PRP2 0x0 00:19:20.367 [2024-11-20 08:31:47.150983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.150996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.367 [2024-11-20 08:31:47.151005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.367 [2024-11-20 08:31:47.151015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111400 len:8 PRP1 0x0 PRP2 0x0 00:19:20.367 [2024-11-20 08:31:47.151027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:20.367 [2024-11-20 08:31:47.151040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.367 [2024-11-20 08:31:47.151049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.367 [2024-11-20 08:31:47.151059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111408 len:8 PRP1 0x0 PRP2 0x0 00:19:20.367 [2024-11-20 08:31:47.151072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.151085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.367 [2024-11-20 08:31:47.151094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.367 [2024-11-20 08:31:47.151119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111416 len:8 PRP1 0x0 PRP2 0x0 00:19:20.367 [2024-11-20 08:31:47.151131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.151143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.367 [2024-11-20 08:31:47.151168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.367 [2024-11-20 08:31:47.151178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111424 len:8 PRP1 0x0 PRP2 0x0 00:19:20.367 [2024-11-20 08:31:47.151190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.367 [2024-11-20 08:31:47.151203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.367 [2024-11-20 08:31:47.151212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.367 [2024-11-20 08:31:47.151222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111432 len:8 PRP1 0x0 PRP2 0x0 00:19:20.368 [2024-11-20 08:31:47.151235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.151247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.368 [2024-11-20 08:31:47.151256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.368 [2024-11-20 08:31:47.151266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111440 len:8 PRP1 0x0 PRP2 0x0 00:19:20.368 [2024-11-20 08:31:47.151279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.151291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.368 [2024-11-20 08:31:47.151306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.368 [2024-11-20 08:31:47.151316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111448 len:8 PRP1 0x0 PRP2 0x0 00:19:20.368 [2024-11-20 08:31:47.151330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.151343] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.368 [2024-11-20 08:31:47.151357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.368 [2024-11-20 08:31:47.151367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111456 len:8 PRP1 0x0 PRP2 0x0 00:19:20.368 [2024-11-20 08:31:47.151380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.151392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.368 [2024-11-20 08:31:47.151402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.368 [2024-11-20 08:31:47.151412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111464 len:8 PRP1 0x0 PRP2 0x0 00:19:20.368 [2024-11-20 08:31:47.151424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.151437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.368 [2024-11-20 08:31:47.151446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.368 [2024-11-20 08:31:47.151455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111472 len:8 PRP1 0x0 PRP2 0x0 00:19:20.368 [2024-11-20 08:31:47.151468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.151492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.368 [2024-11-20 08:31:47.151501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.368 [2024-11-20 08:31:47.151510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111480 len:8 PRP1 0x0 PRP2 0x0 00:19:20.368 [2024-11-20 08:31:47.151522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.151534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.368 [2024-11-20 08:31:47.151543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.368 [2024-11-20 08:31:47.151552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111488 len:8 PRP1 0x0 PRP2 0x0 00:19:20.368 [2024-11-20 08:31:47.151564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.151576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.368 [2024-11-20 08:31:47.151585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.368 [2024-11-20 08:31:47.151594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111496 len:8 PRP1 0x0 PRP2 0x0 00:19:20.368 [2024-11-20 08:31:47.151633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.151664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:19:20.368 [2024-11-20 08:31:47.151675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.368 [2024-11-20 08:31:47.151685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111504 len:8 PRP1 0x0 PRP2 0x0 00:19:20.368 [2024-11-20 08:31:47.151699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.151713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.368 [2024-11-20 08:31:47.151730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.368 [2024-11-20 08:31:47.151741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111512 len:8 PRP1 0x0 PRP2 0x0 00:19:20.368 [2024-11-20 08:31:47.151755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.151997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.368 [2024-11-20 08:31:47.152024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.152040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.368 [2024-11-20 08:31:47.152053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.152067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.368 [2024-11-20 08:31:47.152081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.152110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.368 [2024-11-20 08:31:47.152136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.152150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.368 [2024-11-20 08:31:47.152164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.368 [2024-11-20 08:31:47.152182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b811d0 is same with the state(6) to be set 00:19:20.368 [2024-11-20 08:31:47.153425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:20.368 [2024-11-20 08:31:47.153480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b811d0 (9): Bad file descriptor 00:19:20.368 [2024-11-20 08:31:47.153907] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:20.368 [2024-11-20 08:31:47.153953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x1b811d0 with addr=10.0.0.3, port=4421 00:19:20.368 [2024-11-20 08:31:47.153970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b811d0 is same with the state(6) to be set 00:19:20.368 [2024-11-20 08:31:47.154064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b811d0 (9): Bad file descriptor 00:19:20.368 [2024-11-20 08:31:47.154099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:20.368 [2024-11-20 08:31:47.154115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:20.368 [2024-11-20 08:31:47.154128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:20.368 [2024-11-20 08:31:47.154144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:20.369 [2024-11-20 08:31:47.154158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:20.369 6349.03 IOPS, 24.80 MiB/s [2024-11-20T08:32:07.930Z] 6404.03 IOPS, 25.02 MiB/s [2024-11-20T08:32:07.930Z] 6451.08 IOPS, 25.20 MiB/s [2024-11-20T08:32:07.930Z] 6495.10 IOPS, 25.37 MiB/s [2024-11-20T08:32:07.930Z] 6536.52 IOPS, 25.53 MiB/s [2024-11-20T08:32:07.930Z] 6571.63 IOPS, 25.67 MiB/s [2024-11-20T08:32:07.930Z] 6603.17 IOPS, 25.79 MiB/s [2024-11-20T08:32:07.930Z] 6636.02 IOPS, 25.92 MiB/s [2024-11-20T08:32:07.930Z] 6651.02 IOPS, 25.98 MiB/s [2024-11-20T08:32:07.930Z] 6661.62 IOPS, 26.02 MiB/s [2024-11-20T08:32:07.930Z] [2024-11-20 08:31:57.219371] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:19:20.369 6690.50 IOPS, 26.13 MiB/s [2024-11-20T08:32:07.930Z] 6721.94 IOPS, 26.26 MiB/s [2024-11-20T08:32:07.930Z] 6752.56 IOPS, 26.38 MiB/s [2024-11-20T08:32:07.930Z] 6779.49 IOPS, 26.48 MiB/s [2024-11-20T08:32:07.930Z] 6809.10 IOPS, 26.60 MiB/s [2024-11-20T08:32:07.930Z] 6832.22 IOPS, 26.69 MiB/s [2024-11-20T08:32:07.930Z] 6857.90 IOPS, 26.79 MiB/s [2024-11-20T08:32:07.930Z] 6881.11 IOPS, 26.88 MiB/s [2024-11-20T08:32:07.930Z] 6903.17 IOPS, 26.97 MiB/s [2024-11-20T08:32:07.930Z] 6924.85 IOPS, 27.05 MiB/s [2024-11-20T08:32:07.930Z] 6944.05 IOPS, 27.13 MiB/s [2024-11-20T08:32:07.930Z] Received shutdown signal, test time was about 56.174127 seconds 00:19:20.369 00:19:20.369 Latency(us) 00:19:20.369 [2024-11-20T08:32:07.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.369 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:20.369 Verification LBA range: start 0x0 length 0x4000 00:19:20.369 Nvme0n1 : 56.17 6944.92 27.13 0.00 0.00 18400.36 1124.54 7046430.72 00:19:20.369 [2024-11-20T08:32:07.930Z] =================================================================================================================== 00:19:20.369 [2024-11-20T08:32:07.930Z] Total : 6944.92 27.13 0.00 0.00 18400.36 1124.54 7046430.72 00:19:20.369 08:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:20.627 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:20.627 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:20.627 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:20.627 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:20.627 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:20.627 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:20.627 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:20.627 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:20.627 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:20.627 rmmod nvme_tcp 00:19:20.627 rmmod nvme_fabrics 00:19:20.886 rmmod nvme_keyring 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80883 ']' 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80883 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' -z 80883 ']' 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@961 -- # kill -0 80883 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # uname 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:19:20.886 08:32:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 80883 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:19:20.886 killing process with pid 80883 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@975 -- # echo 'killing process with pid 80883' 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # kill 80883 00:19:20.886 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@981 -- # wait 80883 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.145 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.405 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:21.405 00:19:21.405 real 1m2.275s 00:19:21.405 user 2m51.913s 00:19:21.405 sys 0m19.654s 00:19:21.405 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1133 -- # xtrace_disable 00:19:21.405 08:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:21.405 ************************************ 00:19:21.405 END TEST nvmf_host_multipath 00:19:21.405 ************************************ 00:19:21.405 08:32:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:21.405 08:32:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:19:21.405 08:32:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1114 -- # xtrace_disable 00:19:21.405 08:32:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.405 ************************************ 00:19:21.405 START TEST nvmf_timeout 00:19:21.405 ************************************ 00:19:21.405 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:21.405 * Looking for test storage... 00:19:21.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:21.405 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:19:21.405 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1638 -- # lcov --version 00:19:21.405 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 
-- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:19:21.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.667 --rc genhtml_branch_coverage=1 00:19:21.667 --rc genhtml_function_coverage=1 00:19:21.667 --rc genhtml_legend=1 00:19:21.667 --rc geninfo_all_blocks=1 00:19:21.667 --rc geninfo_unexecuted_blocks=1 00:19:21.667 00:19:21.667 ' 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:19:21.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.667 --rc genhtml_branch_coverage=1 00:19:21.667 --rc genhtml_function_coverage=1 00:19:21.667 --rc genhtml_legend=1 00:19:21.667 --rc geninfo_all_blocks=1 00:19:21.667 --rc geninfo_unexecuted_blocks=1 00:19:21.667 00:19:21.667 ' 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:19:21.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.667 --rc genhtml_branch_coverage=1 00:19:21.667 --rc genhtml_function_coverage=1 00:19:21.667 --rc genhtml_legend=1 00:19:21.667 --rc geninfo_all_blocks=1 00:19:21.667 --rc geninfo_unexecuted_blocks=1 00:19:21.667 00:19:21.667 ' 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:19:21.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.667 --rc genhtml_branch_coverage=1 00:19:21.667 --rc genhtml_function_coverage=1 00:19:21.667 --rc genhtml_legend=1 00:19:21.667 --rc geninfo_all_blocks=1 00:19:21.667 --rc geninfo_unexecuted_blocks=1 00:19:21.667 00:19:21.667 ' 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.667 08:32:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:19:21.667 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:21.667 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:21.668 Cannot find device "nvmf_init_br" 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:21.668 Cannot find device "nvmf_init_br2" 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:21.668 Cannot find device "nvmf_tgt_br" 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:21.668 Cannot find device "nvmf_tgt_br2" 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:21.668 Cannot find device "nvmf_init_br" 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:21.668 Cannot find device "nvmf_init_br2" 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:21.668 Cannot find device "nvmf_tgt_br" 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:21.668 Cannot find device "nvmf_tgt_br2" 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:21.668 Cannot find device "nvmf_br" 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:21.668 Cannot find device "nvmf_init_if" 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:21.668 Cannot find device "nvmf_init_if2" 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- 
nvmf/common.sh@172 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:21.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:21.668 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:21.928 
08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:21.928 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:21.928 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:19:21.928 00:19:21.928 --- 10.0.0.3 ping statistics --- 00:19:21.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.928 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:21.928 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:21.928 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:19:21.928 00:19:21.928 --- 10.0.0.4 ping statistics --- 00:19:21.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.928 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:21.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:21.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:21.928 00:19:21.928 --- 10.0.0.1 ping statistics --- 00:19:21.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.928 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:21.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:21.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:19:21.928 00:19:21.928 --- 10.0.0.2 ping statistics --- 00:19:21.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.928 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82110 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82110 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # '[' -z 82110 ']' 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@843 -- # local max_retries=100 00:19:21.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@847 -- # xtrace_disable 00:19:21.928 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:21.928 [2024-11-20 08:32:09.473558] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:19:21.928 [2024-11-20 08:32:09.473715] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.186 [2024-11-20 08:32:09.618529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:22.186 [2024-11-20 08:32:09.666473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:22.186 [2024-11-20 08:32:09.666692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.186 [2024-11-20 08:32:09.666774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.186 [2024-11-20 08:32:09.666886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.186 [2024-11-20 08:32:09.666965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.186 [2024-11-20 08:32:09.668152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.186 [2024-11-20 08:32:09.668453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.186 [2024-11-20 08:32:09.720738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:22.445 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:19:22.445 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@871 -- # return 0 00:19:22.445 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.445 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@735 -- # xtrace_disable 00:19:22.445 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:22.445 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.445 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:22.445 08:32:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:22.704 [2024-11-20 08:32:10.117659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.704 08:32:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:22.962 Malloc0 00:19:22.962 08:32:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:23.221 08:32:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:23.479 08:32:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:23.738 [2024-11-20 08:32:11.168063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:23.738 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:23.738 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82152 00:19:23.738 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82152 /var/tmp/bdevperf.sock 00:19:23.738 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # '[' -z 82152 ']' 00:19:23.738 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.738 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@843 -- # local max_retries=100 00:19:23.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.738 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.738 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@847 -- # xtrace_disable 00:19:23.738 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:23.738 [2024-11-20 08:32:11.228978] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:19:23.739 [2024-11-20 08:32:11.229060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82152 ] 00:19:23.998 [2024-11-20 08:32:11.379344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.998 [2024-11-20 08:32:11.442833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.998 [2024-11-20 08:32:11.503302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:24.257 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:19:24.257 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@871 -- # return 0 00:19:24.258 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:24.516 08:32:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:24.775 NVMe0n1 00:19:24.775 08:32:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82168 00:19:24.775 08:32:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:24.775 08:32:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:25.038 Running I/O for 10 seconds... 
00:19:25.984 08:32:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:26.244 7232.00 IOPS, 28.25 MiB/s [2024-11-20T08:32:13.805Z] [2024-11-20 08:32:13.562209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363b30 is same with the state(6) to be set 00:19:26.244 [2024-11-20 08:32:13.562259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363b30 is same with the state(6) to be set 00:19:26.244 [2024-11-20 08:32:13.562270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363b30 is same with the state(6) to be set 00:19:26.244 [2024-11-20 08:32:13.562873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.244 [2024-11-20 08:32:13.563041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.244 [2024-11-20 08:32:13.563298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.244 [2024-11-20 08:32:13.563333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.244 [2024-11-20 08:32:13.563368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:26.244 [2024-11-20 08:32:13.563474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.244 [2024-11-20 08:32:13.563709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.244 [2024-11-20 08:32:13.563721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.563730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.563742] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.563752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.563764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.563773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.563785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.563795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.563820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.563831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.563843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.563853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.563864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.563901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.563913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.563924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.563935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.563946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.563957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.563975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.563986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.563995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564007] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.245 [2024-11-20 08:32:13.564525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.245 [2024-11-20 08:32:13.564630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.245 [2024-11-20 08:32:13.564641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 
08:32:13.564671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.564987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.564996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.246 [2024-11-20 08:32:13.565058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.246 [2024-11-20 08:32:13.565078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.246 [2024-11-20 08:32:13.565099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.246 [2024-11-20 08:32:13.565119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.246 [2024-11-20 08:32:13.565139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.246 [2024-11-20 08:32:13.565160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.246 [2024-11-20 08:32:13.565180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.246 [2024-11-20 08:32:13.565201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.246 [2024-11-20 08:32:13.565479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.246 [2024-11-20 08:32:13.565489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.247 [2024-11-20 08:32:13.565509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.247 [2024-11-20 08:32:13.565533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 
08:32:13.565544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.247 [2024-11-20 08:32:13.565554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.247 [2024-11-20 08:32:13.565575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.247 [2024-11-20 08:32:13.565596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.247 [2024-11-20 08:32:13.565617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.247 [2024-11-20 08:32:13.565638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.247 [2024-11-20 08:32:13.565659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.247 [2024-11-20 08:32:13.565680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.247 [2024-11-20 08:32:13.565701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ca1d0 is same with the state(6) to be set 00:19:26.247 [2024-11-20 08:32:13.565729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.565738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.565746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69240 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.565755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 
[2024-11-20 08:32:13.565765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.565772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.565781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69760 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.565790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.565819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.565827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69768 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.565836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.565852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.565866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69776 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.565876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.565893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.565900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69784 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.565909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.565925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.565942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69792 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.565951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.565966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.565974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.565981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69800 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.565995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.566004] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.566011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.566018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69808 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.566027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.566036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.566044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.566051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69816 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.566060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.566069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.566079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.566090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69824 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.566099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.566108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.566115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.566123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69832 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.566132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.566141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.566148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.566155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69840 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.566164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.566173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.566180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.566188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69848 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.566197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.566207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:19:26.247 [2024-11-20 08:32:13.566214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.566226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69856 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.566236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.566245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.566252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.566259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69864 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.566268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.566277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.566285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.566302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69872 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.566311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.247 [2024-11-20 08:32:13.566320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.247 [2024-11-20 08:32:13.566327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.247 [2024-11-20 08:32:13.566334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69880 len:8 PRP1 0x0 PRP2 0x0 00:19:26.247 [2024-11-20 08:32:13.566343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.248 [2024-11-20 08:32:13.566352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.248 [2024-11-20 08:32:13.566359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.248 [2024-11-20 08:32:13.566367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69888 len:8 PRP1 0x0 PRP2 0x0 00:19:26.248 [2024-11-20 08:32:13.566376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.248 [2024-11-20 08:32:13.566385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.248 [2024-11-20 08:32:13.566400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.248 [2024-11-20 08:32:13.566409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69896 len:8 PRP1 0x0 PRP2 0x0 00:19:26.248 [2024-11-20 08:32:13.566418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.248 [2024-11-20 08:32:13.566427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.248 [2024-11-20 08:32:13.566434] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.248 [2024-11-20 08:32:13.566441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69904 len:8 PRP1 0x0 PRP2 0x0 00:19:26.248 [2024-11-20 08:32:13.566450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.248 [2024-11-20 08:32:13.566459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.248 [2024-11-20 08:32:13.566466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.248 [2024-11-20 08:32:13.566474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69912 len:8 PRP1 0x0 PRP2 0x0 00:19:26.248 [2024-11-20 08:32:13.566482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.248 [2024-11-20 08:32:13.566616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.248 [2024-11-20 08:32:13.566640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.248 [2024-11-20 08:32:13.566657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.248 [2024-11-20 08:32:13.566666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.248 [2024-11-20 08:32:13.566676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.248 [2024-11-20 08:32:13.566685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.248 [2024-11-20 08:32:13.566695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.248 [2024-11-20 08:32:13.566704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.248 [2024-11-20 08:32:13.566713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135ce50 is same with the state(6) to be set 00:19:26.248 [2024-11-20 08:32:13.566955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:26.248 [2024-11-20 08:32:13.566979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135ce50 (9): Bad file descriptor 00:19:26.248 [2024-11-20 08:32:13.567076] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:26.248 [2024-11-20 08:32:13.567097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x135ce50 with addr=10.0.0.3, port=4420 00:19:26.248 [2024-11-20 08:32:13.567108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135ce50 is same with the state(6) to be set 00:19:26.248 [2024-11-20 08:32:13.567126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135ce50 (9): Bad file descriptor 00:19:26.248 [2024-11-20 08:32:13.567141] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:26.248 [2024-11-20 08:32:13.567150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:26.248 [2024-11-20 08:32:13.567160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:26.248 [2024-11-20 08:32:13.567171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:26.248 [2024-11-20 08:32:13.567182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:26.248 08:32:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:28.122 4306.00 IOPS, 16.82 MiB/s [2024-11-20T08:32:15.683Z] 2870.67 IOPS, 11.21 MiB/s [2024-11-20T08:32:15.683Z] [2024-11-20 08:32:15.567566] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.122 [2024-11-20 08:32:15.567652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x135ce50 with addr=10.0.0.3, port=4420 00:19:28.122 [2024-11-20 08:32:15.567669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135ce50 is same with the state(6) to be set 00:19:28.122 [2024-11-20 08:32:15.567696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135ce50 (9): Bad file descriptor 00:19:28.122 [2024-11-20 08:32:15.567729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:28.122 [2024-11-20 08:32:15.567743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:28.122 [2024-11-20 08:32:15.567754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:28.122 [2024-11-20 08:32:15.567766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:19:28.122 [2024-11-20 08:32:15.567777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:28.122 08:32:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:28.122 08:32:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:28.122 08:32:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:28.381 08:32:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:28.381 08:32:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:28.381 08:32:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:28.381 08:32:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:28.640 08:32:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:28.640 08:32:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:30.276 2153.00 IOPS, 8.41 MiB/s [2024-11-20T08:32:17.837Z] 1722.40 IOPS, 6.73 MiB/s [2024-11-20T08:32:17.837Z] [2024-11-20 08:32:17.568022] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.276 [2024-11-20 08:32:17.568254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x135ce50 with addr=10.0.0.3, port=4420 00:19:30.276 [2024-11-20 08:32:17.568279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135ce50 is same with the state(6) to be set 00:19:30.276 [2024-11-20 08:32:17.568305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135ce50 (9): Bad file descriptor 00:19:30.276 [2024-11-20 08:32:17.568326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:30.276 [2024-11-20 08:32:17.568338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:30.276 [2024-11-20 08:32:17.568349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:30.276 [2024-11-20 08:32:17.568361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:30.276 [2024-11-20 08:32:17.568384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:32.224 1435.33 IOPS, 5.61 MiB/s [2024-11-20T08:32:19.785Z] 1230.29 IOPS, 4.81 MiB/s [2024-11-20T08:32:19.785Z] [2024-11-20 08:32:19.568474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:32.224 [2024-11-20 08:32:19.568542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:32.224 [2024-11-20 08:32:19.568555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:32.224 [2024-11-20 08:32:19.568566] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:32.224 [2024-11-20 08:32:19.568578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
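The host/timeout.sh@57 and @58 checks traced above reduce to two RPC queries against the running bdevperf process. As a rough standalone illustration only (not the actual get_controller/get_bdev helpers; the rpc.py path, socket path, and expected names NVMe0/NVMe0n1 are taken from the trace), the same verification could be expressed as:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # Ask bdevperf which NVMe controllers and bdevs it currently has attached.
  ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
  bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')
  # At this point in the test the controller/bdev created earlier should still exist.
  [[ "$ctrlr" == "NVMe0" && "$bdev" == "NVMe0n1" ]] || echo "unexpected names: '$ctrlr' / '$bdev'"

Later in the log the same two queries are expected to return empty strings, once the controller has been dropped after ctrlr-loss-timeout expires.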
00:19:33.159 1076.50 IOPS, 4.21 MiB/s
00:19:33.159 Latency(us)
00:19:33.159 [2024-11-20T08:32:20.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:33.159 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:33.159 Verification LBA range: start 0x0 length 0x4000
00:19:33.159 NVMe0n1 : 8.18 1053.25 4.11 15.65 0.00 119544.48 3470.43 7015926.69
00:19:33.159 [2024-11-20T08:32:20.720Z] ===================================================================================================================
00:19:33.159 [2024-11-20T08:32:20.720Z] Total : 1053.25 4.11 15.65 0.00 119544.48 3470.43 7015926.69
00:19:33.159 {
00:19:33.159 "results": [
00:19:33.159 {
00:19:33.159 "job": "NVMe0n1",
00:19:33.159 "core_mask": "0x4",
00:19:33.159 "workload": "verify",
00:19:33.159 "status": "finished",
00:19:33.159 "verify_range": {
00:19:33.159 "start": 0,
00:19:33.159 "length": 16384
00:19:33.159 },
00:19:33.159 "queue_depth": 128,
00:19:33.159 "io_size": 4096,
00:19:33.159 "runtime": 8.176579,
00:19:33.159 "iops": 1053.2522219867258,
00:19:33.159 "mibps": 4.1142664921356475,
00:19:33.159 "io_failed": 128,
00:19:33.159 "io_timeout": 0,
00:19:33.159 "avg_latency_us": 119544.47816933639,
00:19:33.159 "min_latency_us": 3470.429090909091,
00:19:33.159 "max_latency_us": 7015926.69090909
00:19:33.159 }
00:19:33.159 ],
00:19:33.159 "core_count": 1
00:19:33.159 }
00:19:33.726 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:19:33.726 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:33.726 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:33.985 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:19:33.985 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:19:33.985 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:33.985 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82168
00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82152
00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' -z 82152 ']'
00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@961 -- # kill -0 82152
00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # uname
00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']'
00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 82152
00:19:34.244 killing process with pid 82152
00:19:34.244 Received shutdown signal, test time was about 9.398411 seconds
00:19:34.244
00:19:34.244 Latency(us)
00:19:34.244 [2024-11-20T08:32:21.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:34.244 [2024-11-20T08:32:21.805Z] ===================================================================================================================
00:19:34.244 [2024-11-20T08:32:21.805Z] Total : 0.00
0.00 0.00 0.00 0.00 0.00 0.00 00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@975 -- # echo 'killing process with pid 82152' 00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # kill 82152 00:19:34.244 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@981 -- # wait 82152 00:19:34.502 08:32:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:34.761 [2024-11-20 08:32:22.214739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:34.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.761 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82285 00:19:34.761 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:34.761 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82285 /var/tmp/bdevperf.sock 00:19:34.761 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # '[' -z 82285 ']' 00:19:34.761 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.761 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@843 -- # local max_retries=100 00:19:34.761 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.761 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@847 -- # xtrace_disable 00:19:34.761 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:34.761 [2024-11-20 08:32:22.279503] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:19:34.761 [2024-11-20 08:32:22.279745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82285 ] 00:19:35.020 [2024-11-20 08:32:22.424816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.020 [2024-11-20 08:32:22.482919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.020 [2024-11-20 08:32:22.539642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:35.279 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:19:35.279 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@871 -- # return 0 00:19:35.279 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:35.538 08:32:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:35.813 NVMe0n1 00:19:35.813 08:32:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82301 00:19:35.813 08:32:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:35.814 08:32:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:35.814 Running I/O for 10 seconds... 
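The lines above set up the second scenario: bdevperf is started idle (-z) on its own RPC socket, the controller is attached with finite reconnect and ctrlr-loss timeouts, and perform_tests launches the 10-second verify workload. A condensed sketch of that sequence with the values copied from the trace (not the literal timeout.sh body; the real harness also waits for the RPC socket to come up before issuing calls):

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock
    # Start bdevperf idle; -z makes it wait for an explicit perform_tests RPC.
    "$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
    # Options and attach parameters exactly as shown in the trace above.
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options -r -1
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # Kick off the workload over the same socket.
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &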
00:19:36.766 08:32:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:37.027 7888.00 IOPS, 30.81 MiB/s [2024-11-20T08:32:24.588Z] [2024-11-20 08:32:24.464223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.027 [2024-11-20 08:32:24.464456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.027 [2024-11-20 08:32:24.464502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.027 [2024-11-20 08:32:24.464534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.027 [2024-11-20 08:32:24.464556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.027 [2024-11-20 08:32:24.464578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.027 [2024-11-20 08:32:24.464600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.027 [2024-11-20 08:32:24.464622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.027 [2024-11-20 08:32:24.464644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76936 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:37.027 [2024-11-20 08:32:24.464938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.464981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.464993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.465002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.465014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.027 [2024-11-20 08:32:24.465024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.027 [2024-11-20 08:32:24.465035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465158] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465375] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.028 [2024-11-20 08:32:24.465546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.028 [2024-11-20 08:32:24.465714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.028 [2024-11-20 08:32:24.465724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.465736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.465746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.465757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.465767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.465779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.465788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.465810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.465821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 
[2024-11-20 08:32:24.465833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.465843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.465864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.465874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.465885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.465895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.465907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.465918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.465930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.465940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.465952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.465962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.465973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.465983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.465995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.466005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.466026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.466048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.466069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.466090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.466121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.466147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.466177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.466200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.466235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.466256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.466278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.029 [2024-11-20 08:32:24.466299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:23 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.466321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.466343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.466365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.466386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.029 [2024-11-20 08:32:24.466407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-11-20 08:32:24.466419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77712 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 
[2024-11-20 08:32:24.466781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:37.030 [2024-11-20 08:32:24.466858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.030 [2024-11-20 08:32:24.466879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.030 [2024-11-20 08:32:24.466900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.030 [2024-11-20 08:32:24.466938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.030 [2024-11-20 08:32:24.466959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.030 [2024-11-20 08:32:24.466980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.466992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.030 [2024-11-20 08:32:24.467002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.467013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.030 [2024-11-20 08:32:24.467022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.467033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5b1d0 is same with the state(6) to be set 00:19:37.030 [2024-11-20 08:32:24.467047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.030 [2024-11-20 08:32:24.467055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.030 [2024-11-20 08:32:24.467064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77368 len:8 PRP1 0x0 PRP2 0x0 00:19:37.030 [2024-11-20 08:32:24.467079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.467091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.030 [2024-11-20 08:32:24.467099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.030 [2024-11-20 08:32:24.467108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77824 len:8 PRP1 0x0 PRP2 0x0 00:19:37.030 [2024-11-20 08:32:24.467117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.467127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.030 [2024-11-20 08:32:24.467135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.030 [2024-11-20 08:32:24.467143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77832 len:8 PRP1 0x0 PRP2 0x0 00:19:37.030 [2024-11-20 08:32:24.467152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.030 [2024-11-20 08:32:24.467162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77840 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77848 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:77856 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77864 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77872 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77880 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77888 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77896 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77904 len:8 PRP1 0x0 PRP2 0x0 
00:19:37.031 [2024-11-20 08:32:24.467478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77912 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77920 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77928 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77936 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.467681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:37.031 [2024-11-20 08:32:24.467689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:37.031 [2024-11-20 08:32:24.467698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77944 len:8 PRP1 0x0 PRP2 0x0 00:19:37.031 [2024-11-20 08:32:24.467713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.468338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.031 [2024-11-20 08:32:24.468763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.468913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.031 [2024-11-20 08:32:24.469048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.469171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.031 [2024-11-20 08:32:24.469282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.469344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.031 [2024-11-20 08:32:24.469496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.031 [2024-11-20 08:32:24.469627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbede50 is same with the state(6) to be set 00:19:37.031 [2024-11-20 08:32:24.469989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:37.031 [2024-11-20 08:32:24.470135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbede50 (9): Bad file descriptor 00:19:37.031 [2024-11-20 08:32:24.470382] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.031 [2024-11-20 08:32:24.470495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbede50 with addr=10.0.0.3, port=4420 00:19:37.031 [2024-11-20 08:32:24.470630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbede50 is same with the state(6) to be set 00:19:37.031 [2024-11-20 08:32:24.470772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbede50 (9): Bad file descriptor 00:19:37.031 [2024-11-20 08:32:24.470950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:37.031 [2024-11-20 08:32:24.471082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:37.031 [2024-11-20 08:32:24.471171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:37.031 [2024-11-20 08:32:24.471265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:19:37.031 [2024-11-20 08:32:24.471399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:37.031 08:32:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:37.968 4808.00 IOPS, 18.78 MiB/s [2024-11-20T08:32:25.529Z] [2024-11-20 08:32:25.471677] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.968 [2024-11-20 08:32:25.471879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbede50 with addr=10.0.0.3, port=4420 00:19:37.968 [2024-11-20 08:32:25.472098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbede50 is same with the state(6) to be set 00:19:37.968 [2024-11-20 08:32:25.472137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbede50 (9): Bad file descriptor 00:19:37.968 [2024-11-20 08:32:25.472160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:37.968 [2024-11-20 08:32:25.472171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:37.968 [2024-11-20 08:32:25.472182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:37.968 [2024-11-20 08:32:25.472194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:37.968 [2024-11-20 08:32:25.472207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:37.968 08:32:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:38.226 [2024-11-20 08:32:25.729454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:38.226 08:32:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82301 00:19:39.047 3205.33 IOPS, 12.52 MiB/s [2024-11-20T08:32:26.608Z] [2024-11-20 08:32:26.484412] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
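The recovery above is driven by toggling the target listener: the test removes the 10.0.0.3:4420 listener shortly after I/O starts, so every bdev_nvme reconnect attempt fails with errno 111, and then re-adds it; in this run the listener comes back well inside the 5-second ctrlr-loss window, so the pending reset succeeds and throughput climbs again. A hedged sketch of that toggle (both rpc.py commands copied from the trace; the sleep duration is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420   # reconnects now fail (ECONNREFUSED)
    sleep 1                                                                   # stay under --ctrlr-loss-timeout-sec 5
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420      # next reconnect attempt can succeed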
00:19:40.917 2404.00 IOPS, 9.39 MiB/s [2024-11-20T08:32:29.410Z] 3386.60 IOPS, 13.23 MiB/s [2024-11-20T08:32:30.344Z] 4410.83 IOPS, 17.23 MiB/s [2024-11-20T08:32:31.723Z] 5207.00 IOPS, 20.34 MiB/s [2024-11-20T08:32:32.294Z] 5737.12 IOPS, 22.41 MiB/s [2024-11-20T08:32:33.670Z] 6151.22 IOPS, 24.03 MiB/s [2024-11-20T08:32:33.670Z] 6483.30 IOPS, 25.33 MiB/s 00:19:46.109 Latency(us) 00:19:46.109 [2024-11-20T08:32:33.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.109 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:46.109 Verification LBA range: start 0x0 length 0x4000 00:19:46.109 NVMe0n1 : 10.01 6485.86 25.34 0.00 0.00 19689.03 1638.40 3019898.88 00:19:46.109 [2024-11-20T08:32:33.670Z] =================================================================================================================== 00:19:46.109 [2024-11-20T08:32:33.670Z] Total : 6485.86 25.34 0.00 0.00 19689.03 1638.40 3019898.88 00:19:46.109 { 00:19:46.109 "results": [ 00:19:46.109 { 00:19:46.109 "job": "NVMe0n1", 00:19:46.109 "core_mask": "0x4", 00:19:46.109 "workload": "verify", 00:19:46.109 "status": "finished", 00:19:46.109 "verify_range": { 00:19:46.109 "start": 0, 00:19:46.109 "length": 16384 00:19:46.109 }, 00:19:46.109 "queue_depth": 128, 00:19:46.109 "io_size": 4096, 00:19:46.109 "runtime": 10.008388, 00:19:46.109 "iops": 6485.859660916423, 00:19:46.109 "mibps": 25.33538930045478, 00:19:46.109 "io_failed": 0, 00:19:46.109 "io_timeout": 0, 00:19:46.109 "avg_latency_us": 19689.02949228548, 00:19:46.109 "min_latency_us": 1638.4, 00:19:46.109 "max_latency_us": 3019898.88 00:19:46.109 } 00:19:46.109 ], 00:19:46.109 "core_count": 1 00:19:46.109 } 00:19:46.109 08:32:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82410 00:19:46.110 08:32:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:46.110 08:32:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:46.110 Running I/O for 10 seconds... 
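The JSON block above is the per-job result emitted alongside the summary table; runtime, iops and the latency fields mirror the table columns printed just before it. A hedged jq sketch for pulling the headline numbers out of such a block (results.json is a hypothetical file holding JSON like the block above; field names are taken from it):

    # Sketch: summarize one bdevperf result block.
    jq -r '.results[] |
           "\(.job): \(.iops|floor) IOPS, \(.io_failed) failed, avg \(.avg_latency_us|floor) us (runtime \(.runtime)s)"' \
       results.json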
00:19:47.050 08:32:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:47.050 7188.00 IOPS, 28.08 MiB/s [2024-11-20T08:32:34.611Z] [2024-11-20 08:32:34.568373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.050 [2024-11-20 08:32:34.568604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.568777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.568973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.569106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.569277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.569439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.569589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.569748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.569895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.570045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.570192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.570304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.570386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.570495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.570631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.570762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.570892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.570911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64864 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.570922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.570934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.570944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.570955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.570964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.570976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.570986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.571007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.571016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.571028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.571037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.571048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.571058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.571069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.571079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.571093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.571103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.571114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.571124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.571135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:47.050 [2024-11-20 08:32:34.571145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.571156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.571180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.571191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.571201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.571213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.571226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.050 [2024-11-20 08:32:34.571236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.050 [2024-11-20 08:32:34.571246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571389] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571632] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.571988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.571997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.572008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.572017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.572028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.572037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.572048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.572060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.572070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.572079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.572091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.572101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 
[2024-11-20 08:32:34.572112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.572121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.572131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.572141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.572152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.051 [2024-11-20 08:32:34.572177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.051 [2024-11-20 08:32:34.572188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:47 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65512 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.572980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.572989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.573001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 
08:32:34.573010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.573021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.573031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.573042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.052 [2024-11-20 08:32:34.573060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.052 [2024-11-20 08:32:34.573074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.053 [2024-11-20 08:32:34.573083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.053 [2024-11-20 08:32:34.573104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.053 [2024-11-20 08:32:34.573124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.053 [2024-11-20 08:32:34.573145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.053 [2024-11-20 08:32:34.573166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.053 [2024-11-20 08:32:34.573187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.053 [2024-11-20 08:32:34.573217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.053 [2024-11-20 08:32:34.573237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573445] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.053 [2024-11-20 08:32:34.573548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.053 [2024-11-20 08:32:34.573568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5c290 is same with the state(6) to be set 00:19:47.053 [2024-11-20 08:32:34.573592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:47.053 [2024-11-20 08:32:34.573600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:47.053 [2024-11-20 08:32:34.573609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65688 len:8 PRP1 0x0 PRP2 0x0 00:19:47.053 [2024-11-20 08:32:34.573619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.053 [2024-11-20 08:32:34.573770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.053 [2024-11-20 08:32:34.573790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.053 [2024-11-20 08:32:34.573827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.053 [2024-11-20 08:32:34.573847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.053 [2024-11-20 08:32:34.573856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbede50 is same with the state(6) to be set 00:19:47.053 [2024-11-20 08:32:34.574085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:47.053 [2024-11-20 08:32:34.574108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbede50 (9): Bad file descriptor 00:19:47.053 [2024-11-20 08:32:34.574197] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:47.053 [2024-11-20 08:32:34.574234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbede50 with addr=10.0.0.3, port=4420 00:19:47.053 [2024-11-20 08:32:34.574247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbede50 is same with the state(6) to be set 00:19:47.053 [2024-11-20 08:32:34.574266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbede50 (9): Bad file descriptor 00:19:47.053 [2024-11-20 08:32:34.574282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:47.053 [2024-11-20 08:32:34.574292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:47.053 [2024-11-20 08:32:34.574302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:47.053 [2024-11-20 08:32:34.574313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:47.053 [2024-11-20 08:32:34.574325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:47.053 08:32:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:48.250 4042.00 IOPS, 15.79 MiB/s [2024-11-20T08:32:35.811Z] [2024-11-20 08:32:35.574466] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.250 [2024-11-20 08:32:35.574747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbede50 with addr=10.0.0.3, port=4420 00:19:48.250 [2024-11-20 08:32:35.574926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbede50 is same with the state(6) to be set 00:19:48.250 [2024-11-20 08:32:35.575098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbede50 (9): Bad file descriptor 00:19:48.250 [2024-11-20 08:32:35.575230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:48.250 [2024-11-20 08:32:35.575295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:48.250 [2024-11-20 08:32:35.575410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:48.250 [2024-11-20 08:32:35.575449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:48.250 [2024-11-20 08:32:35.575756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:49.186 2694.67 IOPS, 10.53 MiB/s [2024-11-20T08:32:36.747Z] [2024-11-20 08:32:36.575956] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:49.186 [2024-11-20 08:32:36.577757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbede50 with addr=10.0.0.3, port=4420 00:19:49.186 [2024-11-20 08:32:36.577790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbede50 is same with the state(6) to be set 00:19:49.186 [2024-11-20 08:32:36.577838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbede50 (9): Bad file descriptor 00:19:49.186 [2024-11-20 08:32:36.577861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:49.186 [2024-11-20 08:32:36.577873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:49.186 [2024-11-20 08:32:36.577884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:49.186 [2024-11-20 08:32:36.577897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
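While the listener stays down, bdev_nvme keeps scheduling reconnects; the failed attempts above land roughly one second apart (08:32:34.57, 35.57, 36.57). One way to eyeball that cadence is to pull only the connect() failures out of a saved copy of this console output; the filename here is purely illustrative:

  grep 'connect() failed, errno = 111' console.log
  # each matching line carries the attempt's timestamp, so the retry interval is visible at a glance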
00:19:49.186 [2024-11-20 08:32:36.577910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:50.122 2021.00 IOPS, 7.89 MiB/s [2024-11-20T08:32:37.683Z] [2024-11-20 08:32:37.579007] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.122 [2024-11-20 08:32:37.579081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbede50 with addr=10.0.0.3, port=4420 00:19:50.122 [2024-11-20 08:32:37.579099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbede50 is same with the state(6) to be set 00:19:50.122 [2024-11-20 08:32:37.579357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbede50 (9): Bad file descriptor 00:19:50.122 [2024-11-20 08:32:37.579602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:50.122 [2024-11-20 08:32:37.579627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:50.122 [2024-11-20 08:32:37.579639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:50.122 [2024-11-20 08:32:37.579651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:50.122 [2024-11-20 08:32:37.579663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:50.122 08:32:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:50.381 [2024-11-20 08:32:37.898744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:50.381 08:32:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82410 00:19:51.208 1616.80 IOPS, 6.32 MiB/s [2024-11-20T08:32:38.770Z] [2024-11-20 08:32:38.605605] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
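Unlike the first pass, this run spent several seconds unable to reach the target, so the summary that follows also reports failed I/O; its Fail/s column is simply io_failed divided by the runtime. Reproducing the figure from the JSON printed below:

  awk 'BEGIN { printf "%.2f\n", 36398 / 10.008428 }'
  # prints ~3636.73, matching the Fail/s column for NVMe0n1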
00:19:53.113 2665.83 IOPS, 10.41 MiB/s [2024-11-20T08:32:41.612Z] 3591.29 IOPS, 14.03 MiB/s [2024-11-20T08:32:42.548Z] 4255.38 IOPS, 16.62 MiB/s [2024-11-20T08:32:43.484Z] 4773.44 IOPS, 18.65 MiB/s [2024-11-20T08:32:43.484Z] 5178.30 IOPS, 20.23 MiB/s 00:19:55.923 Latency(us) 00:19:55.923 [2024-11-20T08:32:43.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.923 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:55.923 Verification LBA range: start 0x0 length 0x4000 00:19:55.923 NVMe0n1 : 10.01 5184.73 20.25 3636.73 0.00 14478.09 670.25 3019898.88 00:19:55.923 [2024-11-20T08:32:43.484Z] =================================================================================================================== 00:19:55.923 [2024-11-20T08:32:43.484Z] Total : 5184.73 20.25 3636.73 0.00 14478.09 0.00 3019898.88 00:19:55.923 { 00:19:55.923 "results": [ 00:19:55.923 { 00:19:55.923 "job": "NVMe0n1", 00:19:55.923 "core_mask": "0x4", 00:19:55.923 "workload": "verify", 00:19:55.923 "status": "finished", 00:19:55.923 "verify_range": { 00:19:55.924 "start": 0, 00:19:55.924 "length": 16384 00:19:55.924 }, 00:19:55.924 "queue_depth": 128, 00:19:55.924 "io_size": 4096, 00:19:55.924 "runtime": 10.008428, 00:19:55.924 "iops": 5184.730309295326, 00:19:55.924 "mibps": 20.252852770684868, 00:19:55.924 "io_failed": 36398, 00:19:55.924 "io_timeout": 0, 00:19:55.924 "avg_latency_us": 14478.089967925582, 00:19:55.924 "min_latency_us": 670.2545454545455, 00:19:55.924 "max_latency_us": 3019898.88 00:19:55.924 } 00:19:55.924 ], 00:19:55.924 "core_count": 1 00:19:55.924 } 00:19:55.924 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82285 00:19:55.924 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' -z 82285 ']' 00:19:55.924 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@961 -- # kill -0 82285 00:19:55.924 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # uname 00:19:55.924 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:19:55.924 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 82285 00:19:56.183 killing process with pid 82285 00:19:56.183 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.183 00:19:56.183 Latency(us) 00:19:56.183 [2024-11-20T08:32:43.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.183 [2024-11-20T08:32:43.744Z] =================================================================================================================== 00:19:56.183 [2024-11-20T08:32:43.744Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@975 -- # echo 'killing process with pid 82285' 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # kill 82285 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@981 -- # wait 82285 00:19:56.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82521 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82521 /var/tmp/bdevperf.sock 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # '[' -z 82521 ']' 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@843 -- # local max_retries=100 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@847 -- # xtrace_disable 00:19:56.183 08:32:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:56.442 [2024-11-20 08:32:43.798895] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:19:56.442 [2024-11-20 08:32:43.799292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82521 ] 00:19:56.442 [2024-11-20 08:32:43.957951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.701 [2024-11-20 08:32:44.017051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.701 [2024-11-20 08:32:44.072216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:57.269 08:32:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:19:57.269 08:32:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@871 -- # return 0 00:19:57.269 08:32:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82537 00:19:57.269 08:32:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82521 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:57.269 08:32:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:57.528 08:32:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:58.096 NVMe0n1 00:19:58.096 08:32:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82579 00:19:58.096 08:32:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:58.096 08:32:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:58.096 Running I/O for 10 seconds... 
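The setup traced above starts a second bdevperf in idle mode on its own RPC socket, attaches a bpftrace probe to it, and attaches the target controller with a short controller-loss timeout and reconnect delay. A condensed sketch of the same sequence, using only the flags and paths shown in this run (waitforlisten and the probe script come from the SPDK test tree):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z) so the bdev can be configured over its private RPC socket.
    "$SPDK"/build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &
    bdevperf_pid=$!

    # Optional tracing, as in the log:
    #   "$SPDK"/scripts/bpftrace.sh "$bdevperf_pid" "$SPDK"/scripts/bpf/nvmf_timeout.bt

    # Apply the same bdev_nvme options as the test (-r -1 -e 9), then attach NVMe0 with a
    # 5 s controller-loss timeout and 2 s between reconnect attempts.
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options -r -1 -e 9
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Kick off the 10-second randread job over the same socket.
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests &

Removing the listener mid-run, as the next step does, is what drives NVMe0 through the abort and reconnect sequence recorded below.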
00:19:59.034 08:32:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:59.296 14732.00 IOPS, 57.55 MiB/s [2024-11-20T08:32:46.857Z] [2024-11-20 08:32:46.676692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.676968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110888 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:59.296 [2024-11-20 08:32:46.677794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.677987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.677998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.678009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.678020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.678031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.678042] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.678053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.678063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.678075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.678085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.678097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.678107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.678120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.678131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.678143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.678163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.678175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.678186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.678197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.296 [2024-11-20 08:32:46.678208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.296 [2024-11-20 08:32:46.678219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.678972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.678982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:59.297 [2024-11-20 08:32:46.678993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.679004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.679016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.679033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.679045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.679055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.679067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.679077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.679089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.679099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.679110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.679121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.297 [2024-11-20 08:32:46.679132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.297 [2024-11-20 08:32:46.679143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679233] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:100 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.679987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.679999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:89272 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.680009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.680021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.680031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.680043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.680053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.680075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.680085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.680097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.680111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.680122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.298 [2024-11-20 08:32:46.680133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.298 [2024-11-20 08:32:46.680144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.299 [2024-11-20 08:32:46.680155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.299 [2024-11-20 08:32:46.680167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.299 [2024-11-20 08:32:46.680177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.299 [2024-11-20 08:32:46.680189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.299 [2024-11-20 08:32:46.680199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.299 [2024-11-20 08:32:46.680213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.299 [2024-11-20 08:32:46.680223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.299 [2024-11-20 08:32:46.680235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:59.299 [2024-11-20 08:32:46.680246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.299 [2024-11-20 08:32:46.680257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.299 [2024-11-20 08:32:46.680267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.299 [2024-11-20 08:32:46.680279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.299 [2024-11-20 08:32:46.680289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.299 [2024-11-20 08:32:46.680301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.299 [2024-11-20 08:32:46.680312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.299 [2024-11-20 08:32:46.680323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.299 [2024-11-20 08:32:46.680334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.299 [2024-11-20 08:32:46.680345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.299 [2024-11-20 08:32:46.680356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.299 [2024-11-20 08:32:46.680367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.299 [2024-11-20 08:32:46.680377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.299 [2024-11-20 08:32:46.680389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a2090 is same with the state(6) to be set 00:19:59.299 [2024-11-20 08:32:46.680404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.299 [2024-11-20 08:32:46.680413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.299 [2024-11-20 08:32:46.680421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100392 len:8 PRP1 0x0 PRP2 0x0 00:19:59.299 [2024-11-20 08:32:46.680432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.299 [2024-11-20 08:32:46.680767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:59.299 [2024-11-20 08:32:46.681328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2134e50 (9): Bad file descriptor 00:19:59.299 [2024-11-20 08:32:46.681923] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:59.299 [2024-11-20 08:32:46.682104] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2134e50 with addr=10.0.0.3, port=4420 00:19:59.299 [2024-11-20 08:32:46.682175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2134e50 is same with the state(6) to be set 00:19:59.299 [2024-11-20 08:32:46.682328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2134e50 (9): Bad file descriptor 00:19:59.299 [2024-11-20 08:32:46.682473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:59.299 [2024-11-20 08:32:46.682654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:59.299 [2024-11-20 08:32:46.682714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:59.299 [2024-11-20 08:32:46.682753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:59.299 [2024-11-20 08:32:46.682908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:59.299 08:32:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82579 00:20:01.175 8255.00 IOPS, 32.25 MiB/s [2024-11-20T08:32:48.736Z] 5503.33 IOPS, 21.50 MiB/s [2024-11-20T08:32:48.736Z] [2024-11-20 08:32:48.683185] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:01.175 [2024-11-20 08:32:48.683466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2134e50 with addr=10.0.0.3, port=4420 00:20:01.175 [2024-11-20 08:32:48.683662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2134e50 is same with the state(6) to be set 00:20:01.175 [2024-11-20 08:32:48.683886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2134e50 (9): Bad file descriptor 00:20:01.175 [2024-11-20 08:32:48.684068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:01.175 [2024-11-20 08:32:48.684222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:01.175 [2024-11-20 08:32:48.684341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:01.175 [2024-11-20 08:32:48.684468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:20:01.175 [2024-11-20 08:32:48.684624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:03.047 4127.50 IOPS, 16.12 MiB/s [2024-11-20T08:32:50.866Z] 3302.00 IOPS, 12.90 MiB/s [2024-11-20T08:32:50.866Z] [2024-11-20 08:32:50.684921] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.305 [2024-11-20 08:32:50.684991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2134e50 with addr=10.0.0.3, port=4420 00:20:03.305 [2024-11-20 08:32:50.685008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2134e50 is same with the state(6) to be set 00:20:03.305 [2024-11-20 08:32:50.685033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2134e50 (9): Bad file descriptor 00:20:03.305 [2024-11-20 08:32:50.685053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:03.305 [2024-11-20 08:32:50.685064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:03.305 [2024-11-20 08:32:50.685075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:03.305 [2024-11-20 08:32:50.685087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:03.305 [2024-11-20 08:32:50.685099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:05.175 2751.67 IOPS, 10.75 MiB/s [2024-11-20T08:32:52.736Z] 2358.57 IOPS, 9.21 MiB/s [2024-11-20T08:32:52.736Z] [2024-11-20 08:32:52.685176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:05.175 [2024-11-20 08:32:52.685232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:05.175 [2024-11-20 08:32:52.685246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:05.175 [2024-11-20 08:32:52.685257] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:20:05.175 [2024-11-20 08:32:52.685271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
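With the listener gone, each reconnect attempt above fails, bdev_nvme waits the configured 2 s reconnect delay, and once the controller-loss timeout expires the controller is left in a failed state. A rough, illustrative back-of-the-envelope for the trace counted below (timestamps come from nvmf_timeout.bt; the window length is approximate, not the test's exact formula):

    # Illustrative arithmetic only: with a 2 s reconnect delay and roughly 8 s between the
    # listener removal and the final failure, about three delayed reconnects are expected,
    # matching the three "reconnect delay bdev controller NVMe0" lines that host/timeout.sh
    # counts below and apparently requires to exceed two.
    reconnect_delay_sec=2
    observed_window_sec=8      # ~1.4 s -> ~7.4 s in the trace timestamps below
    echo $(( (observed_window_sec - reconnect_delay_sec) / reconnect_delay_sec ))   # prints 3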
00:20:06.369 2063.75 IOPS, 8.06 MiB/s 00:20:06.369 Latency(us) 00:20:06.369 [2024-11-20T08:32:53.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.369 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:06.369 NVMe0n1 : 8.12 2032.84 7.94 15.76 0.00 62423.76 8281.37 7015926.69 00:20:06.369 [2024-11-20T08:32:53.930Z] =================================================================================================================== 00:20:06.369 [2024-11-20T08:32:53.930Z] Total : 2032.84 7.94 15.76 0.00 62423.76 8281.37 7015926.69 00:20:06.369 { 00:20:06.369 "results": [ 00:20:06.369 { 00:20:06.369 "job": "NVMe0n1", 00:20:06.369 "core_mask": "0x4", 00:20:06.369 "workload": "randread", 00:20:06.369 "status": "finished", 00:20:06.369 "queue_depth": 128, 00:20:06.369 "io_size": 4096, 00:20:06.369 "runtime": 8.121647, 00:20:06.369 "iops": 2032.8389057047173, 00:20:06.369 "mibps": 7.940776975409052, 00:20:06.369 "io_failed": 128, 00:20:06.369 "io_timeout": 0, 00:20:06.369 "avg_latency_us": 62423.76154126916, 00:20:06.369 "min_latency_us": 8281.367272727273, 00:20:06.369 "max_latency_us": 7015926.69090909 00:20:06.369 } 00:20:06.369 ], 00:20:06.369 "core_count": 1 00:20:06.369 } 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:06.369 Attaching 5 probes... 00:20:06.369 1377.582230: reset bdev controller NVMe0 00:20:06.369 1378.664851: reconnect bdev controller NVMe0 00:20:06.369 3379.849229: reconnect delay bdev controller NVMe0 00:20:06.369 3379.876725: reconnect bdev controller NVMe0 00:20:06.369 5381.620742: reconnect delay bdev controller NVMe0 00:20:06.369 5381.648015: reconnect bdev controller NVMe0 00:20:06.369 7381.973422: reconnect delay bdev controller NVMe0 00:20:06.369 7381.997355: reconnect bdev controller NVMe0 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82537 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82521 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' -z 82521 ']' 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@961 -- # kill -0 82521 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # uname 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 82521 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@963 -- # process_name=reactor_2 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # '[' reactor_2 = sudo ']' 00:20:06.369 killing process with pid 82521 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@975 -- # echo 'killing process with pid 82521' 00:20:06.369 Received shutdown signal, test time was about 8.201663 seconds 00:20:06.369 00:20:06.369 Latency(us) 00:20:06.369 
[2024-11-20T08:32:53.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.369 [2024-11-20T08:32:53.930Z] =================================================================================================================== 00:20:06.369 [2024-11-20T08:32:53.930Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # kill 82521 00:20:06.369 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@981 -- # wait 82521 00:20:06.628 08:32:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:06.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:06.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:06.886 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:07.145 rmmod nvme_tcp 00:20:07.145 rmmod nvme_fabrics 00:20:07.145 rmmod nvme_keyring 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82110 ']' 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82110 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' -z 82110 ']' 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@961 -- # kill -0 82110 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # uname 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 82110 00:20:07.145 killing process with pid 82110 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@975 -- # echo 'killing process with pid 82110' 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # kill 82110 00:20:07.145 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@981 -- # wait 82110 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:07.404 08:32:54 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:07.404 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.662 08:32:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:07.662 08:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:07.662 08:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.663 08:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.663 08:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.663 08:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:07.663 ************************************ 00:20:07.663 END TEST nvmf_timeout 00:20:07.663 ************************************ 00:20:07.663 00:20:07.663 real 0m46.243s 00:20:07.663 user 2m15.690s 00:20:07.663 sys 0m5.545s 00:20:07.663 08:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1133 -- # xtrace_disable 00:20:07.663 08:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:07.663 08:32:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:07.663 08:32:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:07.663 ************************************ 00:20:07.663 END TEST nvmf_host 00:20:07.663 ************************************ 00:20:07.663 00:20:07.663 real 5m10.352s 00:20:07.663 user 13m30.954s 00:20:07.663 sys 1m10.401s 00:20:07.663 08:32:55 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1133 -- # xtrace_disable 00:20:07.663 08:32:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.663 08:32:55 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:07.663 08:32:55 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:07.663 ************************************ 00:20:07.663 END TEST nvmf_tcp 00:20:07.663 ************************************ 00:20:07.663 00:20:07.663 real 13m0.386s 00:20:07.663 user 31m15.848s 00:20:07.663 sys 3m15.905s 00:20:07.663 08:32:55 nvmf_tcp -- common/autotest_common.sh@1133 -- # xtrace_disable 00:20:07.663 08:32:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:07.663 08:32:55 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:20:07.663 08:32:55 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:07.663 08:32:55 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:20:07.663 08:32:55 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:20:07.663 08:32:55 -- common/autotest_common.sh@10 -- # set +x 00:20:07.663 ************************************ 00:20:07.663 START TEST nvmf_dif 00:20:07.663 ************************************ 00:20:07.663 08:32:55 nvmf_dif -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:07.922 * Looking for test storage... 00:20:07.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:07.922 08:32:55 nvmf_dif -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:20:07.922 08:32:55 nvmf_dif -- common/autotest_common.sh@1638 -- # lcov --version 00:20:07.922 08:32:55 nvmf_dif -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:20:07.922 08:32:55 nvmf_dif -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:07.922 08:32:55 nvmf_dif -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.922 08:32:55 nvmf_dif -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:20:07.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.922 --rc genhtml_branch_coverage=1 00:20:07.922 --rc genhtml_function_coverage=1 00:20:07.922 --rc genhtml_legend=1 00:20:07.922 --rc geninfo_all_blocks=1 00:20:07.922 --rc geninfo_unexecuted_blocks=1 00:20:07.922 00:20:07.922 ' 00:20:07.922 08:32:55 nvmf_dif -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:20:07.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.922 --rc genhtml_branch_coverage=1 00:20:07.922 --rc genhtml_function_coverage=1 00:20:07.922 --rc genhtml_legend=1 00:20:07.922 --rc geninfo_all_blocks=1 00:20:07.922 --rc geninfo_unexecuted_blocks=1 00:20:07.922 00:20:07.922 ' 00:20:07.922 08:32:55 nvmf_dif -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:20:07.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.922 --rc genhtml_branch_coverage=1 00:20:07.922 --rc genhtml_function_coverage=1 00:20:07.922 --rc genhtml_legend=1 00:20:07.922 --rc geninfo_all_blocks=1 00:20:07.922 --rc geninfo_unexecuted_blocks=1 00:20:07.922 00:20:07.922 ' 00:20:07.922 08:32:55 nvmf_dif -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:20:07.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.922 --rc genhtml_branch_coverage=1 00:20:07.922 --rc genhtml_function_coverage=1 00:20:07.922 --rc genhtml_legend=1 00:20:07.922 --rc geninfo_all_blocks=1 00:20:07.922 --rc geninfo_unexecuted_blocks=1 00:20:07.922 00:20:07.922 ' 00:20:07.922 08:32:55 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.922 08:32:55 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.922 08:32:55 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.922 08:32:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.922 08:32:55 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.922 08:32:55 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.922 08:32:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:07.922 08:32:55 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.922 08:32:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.923 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.923 08:32:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:07.923 08:32:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:07.923 08:32:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:07.923 08:32:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:07.923 08:32:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.923 08:32:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:07.923 08:32:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:07.923 08:32:55 nvmf_dif -- 
nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:07.923 Cannot find device "nvmf_init_br" 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:07.923 08:32:55 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:08.191 Cannot find device "nvmf_init_br2" 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:08.191 Cannot find device "nvmf_tgt_br" 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:08.191 Cannot find device "nvmf_tgt_br2" 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:08.191 Cannot find device "nvmf_init_br" 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:08.191 Cannot find device "nvmf_init_br2" 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:08.191 Cannot find device "nvmf_tgt_br" 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:08.191 Cannot find device "nvmf_tgt_br2" 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:08.191 Cannot find device "nvmf_br" 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:08.191 Cannot find device "nvmf_init_if" 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:08.191 Cannot find device "nvmf_init_if2" 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:08.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:08.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@174 -- 
# true 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:08.191 08:32:55 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:08.462 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:08.462 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:20:08.462 00:20:08.462 --- 10.0.0.3 ping statistics --- 00:20:08.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.462 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:08.462 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:08.462 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:20:08.462 00:20:08.462 --- 10.0.0.4 ping statistics --- 00:20:08.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.462 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:08.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:20:08.462 00:20:08.462 --- 10.0.0.1 ping statistics --- 00:20:08.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.462 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:08.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:20:08.462 00:20:08.462 --- 10.0.0.2 ping statistics --- 00:20:08.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.462 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:08.462 08:32:55 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:08.720 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:08.720 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:08.720 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:08.979 08:32:56 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.979 08:32:56 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:08.979 08:32:56 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:08.979 08:32:56 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.979 08:32:56 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:08.979 08:32:56 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:08.979 08:32:56 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:08.979 08:32:56 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:08.979 08:32:56 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.979 08:32:56 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:08.979 08:32:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:08.979 08:32:56 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83082 00:20:08.979 08:32:56 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:08.979 08:32:56 nvmf_dif -- 
nvmf/common.sh@510 -- # waitforlisten 83082 00:20:08.979 08:32:56 nvmf_dif -- common/autotest_common.sh@838 -- # '[' -z 83082 ']' 00:20:08.979 08:32:56 nvmf_dif -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.979 08:32:56 nvmf_dif -- common/autotest_common.sh@843 -- # local max_retries=100 00:20:08.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.979 08:32:56 nvmf_dif -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.979 08:32:56 nvmf_dif -- common/autotest_common.sh@847 -- # xtrace_disable 00:20:08.979 08:32:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:08.979 [2024-11-20 08:32:56.416736] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:20:08.979 [2024-11-20 08:32:56.416860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.238 [2024-11-20 08:32:56.570338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.238 [2024-11-20 08:32:56.643237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.238 [2024-11-20 08:32:56.643297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.238 [2024-11-20 08:32:56.643325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.238 [2024-11-20 08:32:56.643335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.238 [2024-11-20 08:32:56.643344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
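From this point dif.sh drives the target, just started inside the nvmf_tgt_ns_spdk namespace, purely over RPC. Condensed into a standalone sketch using the same commands that rpc_cmd echoes in the trace below (the 10.0.0.3:4420 listener matches the veth addresses configured above):

spdk=/home/vagrant/spdk_repo/spdk
# target lives in the namespace; its RPC unix socket is still reachable from the host
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
# TCP transport with DIF insert/strip, then a 64 MB null bdev with 16-byte metadata, DIF type 1
"$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o --dif-insert-or-strip
"$spdk/scripts/rpc.py" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
"$spdk/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
"$spdk/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420

The multi-subsystem subtest later in the log repeats the last three calls for cnode1 and bdev_null1.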
00:20:09.238 [2024-11-20 08:32:56.643830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.238 [2024-11-20 08:32:56.705880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:09.238 08:32:56 nvmf_dif -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:20:09.238 08:32:56 nvmf_dif -- common/autotest_common.sh@871 -- # return 0 00:20:09.238 08:32:56 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.238 08:32:56 nvmf_dif -- common/autotest_common.sh@735 -- # xtrace_disable 00:20:09.238 08:32:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:09.497 08:32:56 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.497 08:32:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:09.497 08:32:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:09.497 08:32:56 nvmf_dif -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:09.497 08:32:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:09.497 [2024-11-20 08:32:56.829254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.497 08:32:56 nvmf_dif -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:09.497 08:32:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:09.497 08:32:56 nvmf_dif -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:20:09.497 08:32:56 nvmf_dif -- common/autotest_common.sh@1114 -- # xtrace_disable 00:20:09.497 08:32:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:09.497 ************************************ 00:20:09.497 START TEST fio_dif_1_default 00:20:09.497 ************************************ 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1132 -- # fio_dif_1 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:09.497 bdev_null0 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:09.497 08:32:56 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:09.497 [2024-11-20 08:32:56.877397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:09.497 { 00:20:09.497 "params": { 00:20:09.497 "name": "Nvme$subsystem", 00:20:09.497 "trtype": "$TEST_TRANSPORT", 00:20:09.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.497 "adrfam": "ipv4", 00:20:09.497 "trsvcid": "$NVMF_PORT", 00:20:09.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.497 "hdgst": ${hdgst:-false}, 00:20:09.497 "ddgst": ${ddgst:-false} 00:20:09.497 }, 00:20:09.497 "method": "bdev_nvme_attach_controller" 00:20:09.497 } 00:20:09.497 EOF 00:20:09.497 )") 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1331 -- # local sanitizers 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # shift 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local asan_lib= 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@72 -- # (( file = 1 )) 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # grep libasan 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:09.497 "params": { 00:20:09.497 "name": "Nvme0", 00:20:09.497 "trtype": "tcp", 00:20:09.497 "traddr": "10.0.0.3", 00:20:09.497 "adrfam": "ipv4", 00:20:09.497 "trsvcid": "4420", 00:20:09.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:09.497 "hdgst": false, 00:20:09.497 "ddgst": false 00:20:09.497 }, 00:20:09.497 "method": "bdev_nvme_attach_controller" 00:20:09.497 }' 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # asan_lib= 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # grep libclang_rt.asan 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # asan_lib= 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:09.497 08:32:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:09.755 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:09.755 fio-3.35 00:20:09.755 Starting 1 thread 00:20:22.024 00:20:22.024 filename0: (groupid=0, jobs=1): err= 0: pid=83141: Wed Nov 20 08:33:07 2024 00:20:22.024 read: IOPS=8227, BW=32.1MiB/s (33.7MB/s)(321MiB/10001msec) 00:20:22.024 slat (usec): min=6, max=711, avg= 8.92, stdev= 4.25 00:20:22.024 clat (usec): min=357, max=2031, avg=459.71, stdev=39.52 00:20:22.024 lat (usec): min=364, max=2044, avg=468.63, stdev=40.38 00:20:22.024 clat percentiles (usec): 00:20:22.024 | 1.00th=[ 396], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 433], 00:20:22.024 | 30.00th=[ 441], 40.00th=[ 449], 50.00th=[ 457], 60.00th=[ 461], 00:20:22.024 | 70.00th=[ 474], 80.00th=[ 482], 90.00th=[ 502], 95.00th=[ 519], 00:20:22.024 | 99.00th=[ 562], 99.50th=[ 594], 99.90th=[ 652], 99.95th=[ 717], 00:20:22.024 | 99.99th=[ 1860] 00:20:22.024 bw ( KiB/s): min=31075, max=33600, per=99.99%, avg=32907.95, stdev=633.64, samples=19 00:20:22.024 iops : min= 7768, max= 8400, avg=8226.95, stdev=158.53, samples=19 00:20:22.024 lat (usec) : 500=89.96%, 750=9.99%, 1000=0.01% 00:20:22.024 lat 
(msec) : 2=0.03%, 4=0.01% 00:20:22.024 cpu : usr=82.20%, sys=15.74%, ctx=31, majf=0, minf=9 00:20:22.024 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:22.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.024 issued rwts: total=82288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.024 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:22.024 00:20:22.024 Run status group 0 (all jobs): 00:20:22.024 READ: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=321MiB (337MB), run=10001-10001msec 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:22.024 ************************************ 00:20:22.024 END TEST fio_dif_1_default 00:20:22.024 ************************************ 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:22.024 00:20:22.024 real 0m11.084s 00:20:22.024 user 0m8.913s 00:20:22.024 sys 0m1.868s 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1133 -- # xtrace_disable 00:20:22.024 08:33:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:22.024 08:33:07 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:22.024 08:33:07 nvmf_dif -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:20:22.024 08:33:07 nvmf_dif -- common/autotest_common.sh@1114 -- # xtrace_disable 00:20:22.025 08:33:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:22.025 ************************************ 00:20:22.025 START TEST fio_dif_1_multi_subsystems 00:20:22.025 ************************************ 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1132 -- # fio_dif_1_multi_subsystems 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:22.025 08:33:07 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.025 bdev_null0 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:22.025 08:33:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.025 [2024-11-20 08:33:08.016804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.025 bdev_null1 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1331 -- # local sanitizers 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.025 { 00:20:22.025 "params": { 00:20:22.025 "name": "Nvme$subsystem", 00:20:22.025 "trtype": "$TEST_TRANSPORT", 00:20:22.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.025 "adrfam": "ipv4", 00:20:22.025 "trsvcid": "$NVMF_PORT", 00:20:22.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.025 "hdgst": ${hdgst:-false}, 00:20:22.025 "ddgst": ${ddgst:-false} 00:20:22.025 }, 00:20:22.025 "method": "bdev_nvme_attach_controller" 00:20:22.025 } 00:20:22.025 EOF 00:20:22.025 )") 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # shift 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1335 -- # local asan_lib= 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # grep libasan 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.025 { 00:20:22.025 "params": { 00:20:22.025 "name": "Nvme$subsystem", 00:20:22.025 "trtype": "$TEST_TRANSPORT", 00:20:22.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.025 "adrfam": "ipv4", 00:20:22.025 "trsvcid": "$NVMF_PORT", 00:20:22.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.025 "hdgst": ${hdgst:-false}, 00:20:22.025 "ddgst": ${ddgst:-false} 00:20:22.025 }, 00:20:22.025 "method": "bdev_nvme_attach_controller" 00:20:22.025 } 00:20:22.025 EOF 00:20:22.025 )") 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
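gen_nvmf_target_json, traced here, emits one bdev_nvme_attach_controller entry per subsystem and pipes the merged JSON to fio on /dev/fd/62, with the generated job file arriving on /dev/fd/61. A rough equivalent spelled out with ordinary files (bdev names such as Nvme0n1 assume SPDK's usual <controller>n<nsid> naming; the workload settings mirror the rw=randread, bs=4096, iodepth=4 job lines printed below):

# /tmp/bdev.json holds the attach-controller config printed in the trace
cat > /tmp/dif.fio <<'EOF'
[global]
thread=1
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio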
00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:20:22.025 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:22.025 "params": { 00:20:22.025 "name": "Nvme0", 00:20:22.025 "trtype": "tcp", 00:20:22.025 "traddr": "10.0.0.3", 00:20:22.025 "adrfam": "ipv4", 00:20:22.025 "trsvcid": "4420", 00:20:22.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:22.025 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:22.025 "hdgst": false, 00:20:22.025 "ddgst": false 00:20:22.025 }, 00:20:22.025 "method": "bdev_nvme_attach_controller" 00:20:22.025 },{ 00:20:22.025 "params": { 00:20:22.025 "name": "Nvme1", 00:20:22.025 "trtype": "tcp", 00:20:22.025 "traddr": "10.0.0.3", 00:20:22.025 "adrfam": "ipv4", 00:20:22.025 "trsvcid": "4420", 00:20:22.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.025 "hdgst": false, 00:20:22.025 "ddgst": false 00:20:22.025 }, 00:20:22.025 "method": "bdev_nvme_attach_controller" 00:20:22.026 }' 00:20:22.026 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # asan_lib= 00:20:22.026 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:20:22.026 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:22.026 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:22.026 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # grep libclang_rt.asan 00:20:22.026 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:22.026 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # asan_lib= 00:20:22.026 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:20:22.026 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:22.026 08:33:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:22.026 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:22.026 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:22.026 fio-3.35 00:20:22.026 Starting 2 threads 00:20:31.997 00:20:31.997 filename0: (groupid=0, jobs=1): err= 0: pid=83301: Wed Nov 20 08:33:18 2024 00:20:31.997 read: IOPS=4649, BW=18.2MiB/s (19.0MB/s)(182MiB/10001msec) 00:20:31.997 slat (usec): min=6, max=308, avg=14.75, stdev= 7.34 00:20:31.997 clat (usec): min=429, max=2760, avg=819.98, stdev=70.00 00:20:31.997 lat (usec): min=436, max=2796, avg=834.73, stdev=72.09 00:20:31.997 clat percentiles (usec): 00:20:31.997 | 1.00th=[ 676], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 766], 00:20:31.997 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 832], 00:20:31.997 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 906], 95.00th=[ 938], 00:20:31.997 | 99.00th=[ 996], 99.50th=[ 1020], 99.90th=[ 1090], 99.95th=[ 1139], 00:20:31.997 | 99.99th=[ 1582] 00:20:31.997 bw ( KiB/s): min=17376, max=19872, per=50.05%, avg=18620.37, stdev=640.68, samples=19 00:20:31.997 iops : min= 4344, max= 4968, 
avg=4655.05, stdev=160.18, samples=19 00:20:31.997 lat (usec) : 500=0.06%, 750=13.52%, 1000=85.47% 00:20:31.997 lat (msec) : 2=0.94%, 4=0.01% 00:20:31.997 cpu : usr=89.88%, sys=8.46%, ctx=102, majf=0, minf=0 00:20:31.997 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.997 issued rwts: total=46504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.997 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:31.997 filename1: (groupid=0, jobs=1): err= 0: pid=83302: Wed Nov 20 08:33:18 2024 00:20:31.997 read: IOPS=4650, BW=18.2MiB/s (19.0MB/s)(182MiB/10001msec) 00:20:31.997 slat (usec): min=6, max=175, avg=14.79, stdev= 6.72 00:20:31.997 clat (usec): min=418, max=3261, avg=819.26, stdev=65.76 00:20:31.997 lat (usec): min=426, max=3275, avg=834.05, stdev=67.25 00:20:31.997 clat percentiles (usec): 00:20:31.997 | 1.00th=[ 685], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 775], 00:20:31.997 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 824], 00:20:31.997 | 70.00th=[ 840], 80.00th=[ 865], 90.00th=[ 898], 95.00th=[ 930], 00:20:31.997 | 99.00th=[ 979], 99.50th=[ 1004], 99.90th=[ 1057], 99.95th=[ 1090], 00:20:31.997 | 99.99th=[ 2606] 00:20:31.997 bw ( KiB/s): min=17376, max=19872, per=50.06%, avg=18624.00, stdev=635.36, samples=19 00:20:31.997 iops : min= 4344, max= 4968, avg=4656.00, stdev=158.84, samples=19 00:20:31.997 lat (usec) : 500=0.04%, 750=8.98%, 1000=90.47% 00:20:31.997 lat (msec) : 2=0.49%, 4=0.02% 00:20:31.997 cpu : usr=90.18%, sys=8.37%, ctx=126, majf=0, minf=0 00:20:31.997 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.997 issued rwts: total=46508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.997 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:31.997 00:20:31.997 Run status group 0 (all jobs): 00:20:31.997 READ: bw=36.3MiB/s (38.1MB/s), 18.2MiB/s-18.2MiB/s (19.0MB/s-19.0MB/s), io=363MiB (381MB), run=10001-10001msec 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:31.997 ************************************ 00:20:31.997 END TEST fio_dif_1_multi_subsystems 00:20:31.997 ************************************ 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:31.997 00:20:31.997 real 0m11.222s 00:20:31.997 user 0m18.813s 00:20:31.997 sys 0m2.008s 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1133 -- # xtrace_disable 00:20:31.997 08:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:31.998 08:33:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:31.998 08:33:19 nvmf_dif -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:20:31.998 08:33:19 nvmf_dif -- common/autotest_common.sh@1114 -- # xtrace_disable 00:20:31.998 08:33:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:31.998 ************************************ 00:20:31.998 START TEST fio_dif_rand_params 00:20:31.998 ************************************ 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1132 -- # fio_dif_rand_params 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:31.998 08:33:19 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.998 bdev_null0 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:31.998 [2024-11-20 08:33:19.296884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.998 { 00:20:31.998 "params": { 00:20:31.998 "name": "Nvme$subsystem", 00:20:31.998 "trtype": "$TEST_TRANSPORT", 00:20:31.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.998 "adrfam": "ipv4", 00:20:31.998 "trsvcid": "$NVMF_PORT", 00:20:31.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.998 "hdgst": ${hdgst:-false}, 00:20:31.998 "ddgst": ${ddgst:-false} 00:20:31.998 }, 00:20:31.998 "method": "bdev_nvme_attach_controller" 00:20:31.998 } 00:20:31.998 EOF 
00:20:31.998 )") 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local sanitizers 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # shift 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local asan_lib= 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # grep libasan 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
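
As in the previous test, the fio_bdev/fio_plugin wrapper from common/autotest_common.sh probes the SPDK fio engine for a linked sanitizer runtime (the ldd | grep libasan | awk '{print $3}' steps traced above) and preloads whatever it finds ahead of the plugin itself. A minimal sketch of that logic, assuming a non-sanitizer build so asan_lib stays empty, as it does in this run; fio_plugin_sketch is an illustrative name rather than the exact autotest_common.sh implementation:

    fio_plugin_sketch() {
        local plugin=$1; shift
        local sanitizers=('libasan' 'libclang_rt.asan')
        local sanitizer asan_lib ld_preload=

        # If the plugin links a sanitizer runtime, that runtime has to be loaded
        # before anything else, so resolve its path with ldd and prepend it.
        for sanitizer in "${sanitizers[@]}"; do
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && ld_preload+=" $asan_lib"
        done

        # Preload the (possibly empty) sanitizer runtime plus the SPDK bdev engine,
        # then run fio; the JSON bdev config and job file arrive on fds 62 and 61.
        LD_PRELOAD="$ld_preload $plugin" "$@"
    }

    # e.g.:
    # fio_plugin_sketch /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    #     /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
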
00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:31.998 "params": { 00:20:31.998 "name": "Nvme0", 00:20:31.998 "trtype": "tcp", 00:20:31.998 "traddr": "10.0.0.3", 00:20:31.998 "adrfam": "ipv4", 00:20:31.998 "trsvcid": "4420", 00:20:31.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:31.998 "hdgst": false, 00:20:31.998 "ddgst": false 00:20:31.998 }, 00:20:31.998 "method": "bdev_nvme_attach_controller" 00:20:31.998 }' 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # asan_lib= 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # grep libclang_rt.asan 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # asan_lib= 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:31.998 08:33:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:31.998 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:31.998 ... 
00:20:31.998 fio-3.35 00:20:31.998 Starting 3 threads 00:20:38.566 00:20:38.566 filename0: (groupid=0, jobs=1): err= 0: pid=83458: Wed Nov 20 08:33:25 2024 00:20:38.566 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5001msec) 00:20:38.567 slat (nsec): min=7378, max=30278, avg=9785.89, stdev=2894.46 00:20:38.567 clat (usec): min=9941, max=13130, avg=11471.35, stdev=204.61 00:20:38.567 lat (usec): min=9949, max=13155, avg=11481.14, stdev=205.06 00:20:38.567 clat percentiles (usec): 00:20:38.567 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:20:38.567 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:20:38.567 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11863], 00:20:38.567 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13173], 99.95th=[13173], 00:20:38.567 | 99.99th=[13173] 00:20:38.567 bw ( KiB/s): min=33024, max=33792, per=33.30%, avg=33365.33, stdev=404.77, samples=9 00:20:38.567 iops : min= 258, max= 264, avg=260.67, stdev= 3.16, samples=9 00:20:38.567 lat (msec) : 10=0.23%, 20=99.77% 00:20:38.567 cpu : usr=91.30%, sys=8.18%, ctx=3, majf=0, minf=0 00:20:38.567 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:38.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.567 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:38.567 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:38.567 filename0: (groupid=0, jobs=1): err= 0: pid=83459: Wed Nov 20 08:33:25 2024 00:20:38.567 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5001msec) 00:20:38.567 slat (nsec): min=7387, max=34556, avg=10294.76, stdev=3342.99 00:20:38.567 clat (usec): min=8469, max=14611, avg=11469.92, stdev=269.31 00:20:38.567 lat (usec): min=8477, max=14637, avg=11480.21, stdev=269.61 00:20:38.567 clat percentiles (usec): 00:20:38.567 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:20:38.567 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:20:38.567 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11863], 00:20:38.567 | 99.00th=[12256], 99.50th=[12256], 99.90th=[14615], 99.95th=[14615], 00:20:38.567 | 99.99th=[14615] 00:20:38.567 bw ( KiB/s): min=33024, max=33792, per=33.30%, avg=33365.33, stdev=404.77, samples=9 00:20:38.567 iops : min= 258, max= 264, avg=260.67, stdev= 3.16, samples=9 00:20:38.567 lat (msec) : 10=0.23%, 20=99.77% 00:20:38.567 cpu : usr=89.78%, sys=9.66%, ctx=8, majf=0, minf=0 00:20:38.567 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:38.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.567 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:38.567 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:38.567 filename0: (groupid=0, jobs=1): err= 0: pid=83460: Wed Nov 20 08:33:25 2024 00:20:38.567 read: IOPS=261, BW=32.7MiB/s (34.2MB/s)(164MiB/5006msec) 00:20:38.567 slat (nsec): min=7287, max=31827, avg=10235.65, stdev=3403.99 00:20:38.567 clat (usec): min=5323, max=12604, avg=11455.61, stdev=341.95 00:20:38.567 lat (usec): min=5331, max=12616, avg=11465.84, stdev=341.80 00:20:38.567 clat percentiles (usec): 00:20:38.567 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:20:38.567 | 30.00th=[11338], 40.00th=[11338], 
50.00th=[11469], 60.00th=[11469], 00:20:38.567 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11863], 00:20:38.567 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12649], 99.95th=[12649], 00:20:38.567 | 99.99th=[12649] 00:20:38.567 bw ( KiB/s): min=32256, max=33792, per=33.35%, avg=33408.00, stdev=543.06, samples=10 00:20:38.567 iops : min= 252, max= 264, avg=261.00, stdev= 4.24, samples=10 00:20:38.567 lat (msec) : 10=0.23%, 20=99.77% 00:20:38.567 cpu : usr=91.13%, sys=8.27%, ctx=10, majf=0, minf=0 00:20:38.567 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:38.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.567 issued rwts: total=1308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:38.567 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:38.567 00:20:38.567 Run status group 0 (all jobs): 00:20:38.567 READ: bw=97.8MiB/s (103MB/s), 32.6MiB/s-32.7MiB/s (34.2MB/s-34.2MB/s), io=490MiB (514MB), run=5001-5006msec 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:38.567 
08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.567 bdev_null0 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.567 [2024-11-20 08:33:25.405371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.567 bdev_null1 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.567 08:33:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.567 bdev_null2 00:20:38.567 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.568 08:33:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local sanitizers 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.568 { 00:20:38.568 "params": { 00:20:38.568 "name": "Nvme$subsystem", 00:20:38.568 "trtype": "$TEST_TRANSPORT", 00:20:38.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.568 "adrfam": "ipv4", 00:20:38.568 "trsvcid": "$NVMF_PORT", 00:20:38.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.568 "hdgst": ${hdgst:-false}, 00:20:38.568 "ddgst": ${ddgst:-false} 00:20:38.568 }, 00:20:38.568 "method": "bdev_nvme_attach_controller" 00:20:38.568 } 00:20:38.568 EOF 00:20:38.568 )") 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # shift 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local asan_lib= 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # grep libasan 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.568 { 00:20:38.568 "params": { 00:20:38.568 "name": "Nvme$subsystem", 00:20:38.568 "trtype": "$TEST_TRANSPORT", 00:20:38.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.568 "adrfam": "ipv4", 00:20:38.568 "trsvcid": "$NVMF_PORT", 00:20:38.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.568 "hdgst": ${hdgst:-false}, 00:20:38.568 "ddgst": ${ddgst:-false} 00:20:38.568 }, 00:20:38.568 "method": "bdev_nvme_attach_controller" 00:20:38.568 } 00:20:38.568 EOF 00:20:38.568 )") 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:38.568 
08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.568 { 00:20:38.568 "params": { 00:20:38.568 "name": "Nvme$subsystem", 00:20:38.568 "trtype": "$TEST_TRANSPORT", 00:20:38.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.568 "adrfam": "ipv4", 00:20:38.568 "trsvcid": "$NVMF_PORT", 00:20:38.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.568 "hdgst": ${hdgst:-false}, 00:20:38.568 "ddgst": ${ddgst:-false} 00:20:38.568 }, 00:20:38.568 "method": "bdev_nvme_attach_controller" 00:20:38.568 } 00:20:38.568 EOF 00:20:38.568 )") 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:38.568 "params": { 00:20:38.568 "name": "Nvme0", 00:20:38.568 "trtype": "tcp", 00:20:38.568 "traddr": "10.0.0.3", 00:20:38.568 "adrfam": "ipv4", 00:20:38.568 "trsvcid": "4420", 00:20:38.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:38.568 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:38.568 "hdgst": false, 00:20:38.568 "ddgst": false 00:20:38.568 }, 00:20:38.568 "method": "bdev_nvme_attach_controller" 00:20:38.568 },{ 00:20:38.568 "params": { 00:20:38.568 "name": "Nvme1", 00:20:38.568 "trtype": "tcp", 00:20:38.568 "traddr": "10.0.0.3", 00:20:38.568 "adrfam": "ipv4", 00:20:38.568 "trsvcid": "4420", 00:20:38.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.568 "hdgst": false, 00:20:38.568 "ddgst": false 00:20:38.568 }, 00:20:38.568 "method": "bdev_nvme_attach_controller" 00:20:38.568 },{ 00:20:38.568 "params": { 00:20:38.568 "name": "Nvme2", 00:20:38.568 "trtype": "tcp", 00:20:38.568 "traddr": "10.0.0.3", 00:20:38.568 "adrfam": "ipv4", 00:20:38.568 "trsvcid": "4420", 00:20:38.568 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:38.568 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:38.568 "hdgst": false, 00:20:38.568 "ddgst": false 00:20:38.568 }, 00:20:38.568 "method": "bdev_nvme_attach_controller" 00:20:38.568 }' 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # asan_lib= 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # grep libclang_rt.asan 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # asan_lib= 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- 
# LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:38.568 08:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:38.568 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:38.568 ... 00:20:38.568 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:38.568 ... 00:20:38.568 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:38.568 ... 00:20:38.568 fio-3.35 00:20:38.568 Starting 24 threads 00:20:50.802 00:20:50.802 filename0: (groupid=0, jobs=1): err= 0: pid=83556: Wed Nov 20 08:33:36 2024 00:20:50.802 read: IOPS=245, BW=982KiB/s (1006kB/s)(9848KiB/10028msec) 00:20:50.802 slat (usec): min=7, max=4976, avg=30.70, stdev=213.00 00:20:50.802 clat (msec): min=21, max=128, avg=64.99, stdev=19.15 00:20:50.802 lat (msec): min=21, max=128, avg=65.02, stdev=19.15 00:20:50.802 clat percentiles (msec): 00:20:50.802 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:20:50.802 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 65], 60.00th=[ 72], 00:20:50.802 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 99], 00:20:50.802 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:20:50.802 | 99.99th=[ 129] 00:20:50.803 bw ( KiB/s): min= 712, max= 1600, per=4.28%, avg=978.30, stdev=167.24, samples=20 00:20:50.803 iops : min= 178, max= 400, avg=244.55, stdev=41.81, samples=20 00:20:50.803 lat (msec) : 50=24.74%, 100=70.63%, 250=4.63% 00:20:50.803 cpu : usr=43.17%, sys=1.56%, ctx=1478, majf=0, minf=0 00:20:50.803 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:50.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 issued rwts: total=2462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.803 filename0: (groupid=0, jobs=1): err= 0: pid=83557: Wed Nov 20 08:33:36 2024 00:20:50.803 read: IOPS=235, BW=942KiB/s (964kB/s)(9452KiB/10038msec) 00:20:50.803 slat (usec): min=4, max=8044, avg=36.24, stdev=328.79 00:20:50.803 clat (msec): min=20, max=132, avg=67.72, stdev=20.37 00:20:50.803 lat (msec): min=20, max=132, avg=67.75, stdev=20.37 00:20:50.803 clat percentiles (msec): 00:20:50.803 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 50], 00:20:50.803 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 73], 00:20:50.803 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 90], 95.00th=[ 103], 00:20:50.803 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 133], 00:20:50.803 | 99.99th=[ 133] 00:20:50.803 bw ( KiB/s): min= 640, max= 1740, per=4.12%, avg=940.70, stdev=206.11, samples=20 00:20:50.803 iops : min= 160, max= 435, avg=235.15, stdev=51.53, samples=20 00:20:50.803 lat (msec) : 50=23.74%, 100=70.67%, 250=5.59% 00:20:50.803 cpu : usr=36.63%, sys=1.34%, ctx=1124, majf=0, minf=9 00:20:50.803 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:50.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 issued rwts: total=2363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.803 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:20:50.803 filename0: (groupid=0, jobs=1): err= 0: pid=83558: Wed Nov 20 08:33:36 2024 00:20:50.803 read: IOPS=224, BW=898KiB/s (920kB/s)(9040KiB/10062msec) 00:20:50.803 slat (usec): min=3, max=8029, avg=18.40, stdev=168.73 00:20:50.803 clat (msec): min=14, max=155, avg=71.04, stdev=22.11 00:20:50.803 lat (msec): min=14, max=155, avg=71.06, stdev=22.11 00:20:50.803 clat percentiles (msec): 00:20:50.803 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 51], 00:20:50.803 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 79], 00:20:50.803 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:20:50.803 | 99.00th=[ 121], 99.50th=[ 127], 99.90th=[ 129], 99.95th=[ 144], 00:20:50.803 | 99.99th=[ 157] 00:20:50.803 bw ( KiB/s): min= 632, max= 1936, per=3.92%, avg=896.80, stdev=256.94, samples=20 00:20:50.803 iops : min= 158, max= 484, avg=224.20, stdev=64.24, samples=20 00:20:50.803 lat (msec) : 20=4.16%, 50=15.53%, 100=73.10%, 250=7.21% 00:20:50.803 cpu : usr=32.05%, sys=1.33%, ctx=874, majf=0, minf=9 00:20:50.803 IO depths : 1=0.1%, 2=1.3%, 4=5.6%, 8=76.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:50.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 issued rwts: total=2260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.803 filename0: (groupid=0, jobs=1): err= 0: pid=83559: Wed Nov 20 08:33:36 2024 00:20:50.803 read: IOPS=236, BW=945KiB/s (968kB/s)(9512KiB/10061msec) 00:20:50.803 slat (nsec): min=7209, max=73335, avg=14177.03, stdev=6761.20 00:20:50.803 clat (msec): min=3, max=131, avg=67.48, stdev=23.96 00:20:50.803 lat (msec): min=3, max=131, avg=67.50, stdev=23.96 00:20:50.803 clat percentiles (msec): 00:20:50.803 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 49], 00:20:50.803 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 73], 00:20:50.803 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 107], 00:20:50.803 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:20:50.803 | 99.99th=[ 132] 00:20:50.803 bw ( KiB/s): min= 664, max= 2672, per=4.14%, avg=946.85, stdev=412.95, samples=20 00:20:50.803 iops : min= 166, max= 668, avg=236.70, stdev=103.24, samples=20 00:20:50.803 lat (msec) : 4=1.26%, 10=0.67%, 20=4.71%, 50=16.02%, 100=71.61% 00:20:50.803 lat (msec) : 250=5.72% 00:20:50.803 cpu : usr=33.69%, sys=1.44%, ctx=863, majf=0, minf=0 00:20:50.803 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=77.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:50.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 complete : 0=0.0%, 4=89.0%, 8=10.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 issued rwts: total=2378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.803 filename0: (groupid=0, jobs=1): err= 0: pid=83560: Wed Nov 20 08:33:36 2024 00:20:50.803 read: IOPS=242, BW=970KiB/s (994kB/s)(9708KiB/10004msec) 00:20:50.803 slat (usec): min=4, max=8052, avg=38.20, stdev=398.25 00:20:50.803 clat (msec): min=3, max=131, avg=65.76, stdev=19.74 00:20:50.803 lat (msec): min=3, max=131, avg=65.80, stdev=19.73 00:20:50.803 clat percentiles (msec): 00:20:50.803 | 1.00th=[ 29], 5.00th=[ 32], 10.00th=[ 47], 20.00th=[ 48], 00:20:50.803 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:20:50.803 | 70.00th=[ 74], 80.00th=[ 
84], 90.00th=[ 85], 95.00th=[ 96], 00:20:50.803 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:20:50.803 | 99.99th=[ 132] 00:20:50.803 bw ( KiB/s): min= 816, max= 1426, per=4.24%, avg=968.95, stdev=126.19, samples=19 00:20:50.803 iops : min= 204, max= 356, avg=242.21, stdev=31.45, samples=19 00:20:50.803 lat (msec) : 4=0.29%, 20=0.62%, 50=28.14%, 100=66.58%, 250=4.37% 00:20:50.803 cpu : usr=32.69%, sys=1.89%, ctx=861, majf=0, minf=9 00:20:50.803 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=82.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:50.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 issued rwts: total=2427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.803 filename0: (groupid=0, jobs=1): err= 0: pid=83561: Wed Nov 20 08:33:36 2024 00:20:50.803 read: IOPS=233, BW=933KiB/s (956kB/s)(9332KiB/10001msec) 00:20:50.803 slat (usec): min=3, max=4046, avg=24.08, stdev=166.59 00:20:50.803 clat (usec): min=848, max=131886, avg=68448.99, stdev=21046.12 00:20:50.803 lat (usec): min=856, max=131913, avg=68473.07, stdev=21047.70 00:20:50.803 clat percentiles (msec): 00:20:50.803 | 1.00th=[ 3], 5.00th=[ 32], 10.00th=[ 46], 20.00th=[ 51], 00:20:50.803 | 30.00th=[ 57], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:20:50.803 | 70.00th=[ 80], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 105], 00:20:50.803 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:20:50.803 | 99.99th=[ 132] 00:20:50.803 bw ( KiB/s): min= 768, max= 1408, per=4.03%, avg=920.84, stdev=139.15, samples=19 00:20:50.803 iops : min= 192, max= 352, avg=230.21, stdev=34.79, samples=19 00:20:50.803 lat (usec) : 1000=0.26% 00:20:50.803 lat (msec) : 2=0.51%, 4=0.56%, 20=0.56%, 50=17.40%, 100=75.01% 00:20:50.803 lat (msec) : 250=5.70% 00:20:50.803 cpu : usr=41.83%, sys=1.72%, ctx=1270, majf=0, minf=9 00:20:50.803 IO depths : 1=0.1%, 2=2.1%, 4=8.4%, 8=74.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:50.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 complete : 0=0.0%, 4=89.2%, 8=9.0%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 issued rwts: total=2333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.803 filename0: (groupid=0, jobs=1): err= 0: pid=83562: Wed Nov 20 08:33:36 2024 00:20:50.803 read: IOPS=237, BW=948KiB/s (971kB/s)(9536KiB/10055msec) 00:20:50.803 slat (usec): min=5, max=8029, avg=25.81, stdev=246.29 00:20:50.803 clat (msec): min=14, max=160, avg=67.31, stdev=21.35 00:20:50.803 lat (msec): min=14, max=160, avg=67.34, stdev=21.35 00:20:50.803 clat percentiles (msec): 00:20:50.803 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 49], 00:20:50.803 | 30.00th=[ 55], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 74], 00:20:50.803 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 90], 95.00th=[ 105], 00:20:50.803 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 133], 99.95th=[ 133], 00:20:50.803 | 99.99th=[ 161] 00:20:50.803 bw ( KiB/s): min= 608, max= 2019, per=4.14%, avg=946.55, stdev=268.00, samples=20 00:20:50.803 iops : min= 152, max= 504, avg=236.60, stdev=66.84, samples=20 00:20:50.803 lat (msec) : 20=2.01%, 50=20.51%, 100=71.85%, 250=5.62% 00:20:50.803 cpu : usr=41.56%, sys=1.90%, ctx=1338, majf=0, minf=9 00:20:50.803 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:50.803 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.803 issued rwts: total=2384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.803 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.803 filename0: (groupid=0, jobs=1): err= 0: pid=83563: Wed Nov 20 08:33:36 2024 00:20:50.803 read: IOPS=244, BW=977KiB/s (1000kB/s)(9776KiB/10011msec) 00:20:50.803 slat (usec): min=3, max=8026, avg=24.15, stdev=181.51 00:20:50.803 clat (msec): min=15, max=123, avg=65.42, stdev=19.57 00:20:50.803 lat (msec): min=15, max=123, avg=65.45, stdev=19.57 00:20:50.803 clat percentiles (msec): 00:20:50.803 | 1.00th=[ 21], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:20:50.803 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 70], 60.00th=[ 72], 00:20:50.803 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 99], 00:20:50.803 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:20:50.803 | 99.99th=[ 124] 00:20:50.803 bw ( KiB/s): min= 816, max= 1600, per=4.29%, avg=980.63, stdev=162.91, samples=19 00:20:50.803 iops : min= 204, max= 400, avg=245.16, stdev=40.73, samples=19 00:20:50.803 lat (msec) : 20=0.90%, 50=26.06%, 100=68.45%, 250=4.58% 00:20:50.803 cpu : usr=38.00%, sys=1.71%, ctx=1083, majf=0, minf=9 00:20:50.803 IO depths : 1=0.1%, 2=0.2%, 4=1.1%, 8=82.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:50.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 issued rwts: total=2444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.804 filename1: (groupid=0, jobs=1): err= 0: pid=83567: Wed Nov 20 08:33:36 2024 00:20:50.804 read: IOPS=233, BW=932KiB/s (955kB/s)(9376KiB/10056msec) 00:20:50.804 slat (usec): min=4, max=8028, avg=24.05, stdev=234.05 00:20:50.804 clat (msec): min=7, max=133, avg=68.43, stdev=21.44 00:20:50.804 lat (msec): min=7, max=133, avg=68.45, stdev=21.45 00:20:50.804 clat percentiles (msec): 00:20:50.804 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 45], 20.00th=[ 49], 00:20:50.804 | 30.00th=[ 59], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:20:50.804 | 70.00th=[ 82], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 108], 00:20:50.804 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:20:50.804 | 99.99th=[ 134] 00:20:50.804 bw ( KiB/s): min= 664, max= 2039, per=4.07%, avg=930.35, stdev=270.25, samples=20 00:20:50.804 iops : min= 166, max= 509, avg=232.55, stdev=67.40, samples=20 00:20:50.804 lat (msec) : 10=0.04%, 20=2.99%, 50=20.82%, 100=70.18%, 250=5.97% 00:20:50.804 cpu : usr=31.71%, sys=1.64%, ctx=871, majf=0, minf=9 00:20:50.804 IO depths : 1=0.1%, 2=0.6%, 4=2.7%, 8=80.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:50.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 complete : 0=0.0%, 4=88.4%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 issued rwts: total=2344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.804 filename1: (groupid=0, jobs=1): err= 0: pid=83568: Wed Nov 20 08:33:36 2024 00:20:50.804 read: IOPS=237, BW=949KiB/s (972kB/s)(9548KiB/10059msec) 00:20:50.804 slat (usec): min=5, max=4033, avg=20.35, stdev=119.08 00:20:50.804 clat (msec): min=3, max=147, avg=67.17, stdev=24.62 00:20:50.804 lat (msec): min=3, max=147, avg=67.20, stdev=24.62 00:20:50.804 clat 
percentiles (msec): 00:20:50.804 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 31], 20.00th=[ 50], 00:20:50.804 | 30.00th=[ 56], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:20:50.804 | 70.00th=[ 81], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 108], 00:20:50.804 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 138], 99.95th=[ 142], 00:20:50.804 | 99.99th=[ 148] 00:20:50.804 bw ( KiB/s): min= 608, max= 2672, per=4.16%, avg=951.00, stdev=414.43, samples=20 00:20:50.804 iops : min= 152, max= 668, avg=237.75, stdev=103.61, samples=20 00:20:50.804 lat (msec) : 4=1.55%, 10=1.05%, 20=4.02%, 50=15.42%, 100=70.47% 00:20:50.804 lat (msec) : 250=7.50% 00:20:50.804 cpu : usr=43.64%, sys=1.76%, ctx=1575, majf=0, minf=1 00:20:50.804 IO depths : 1=0.1%, 2=1.0%, 4=3.6%, 8=78.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:50.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 issued rwts: total=2387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.804 filename1: (groupid=0, jobs=1): err= 0: pid=83569: Wed Nov 20 08:33:36 2024 00:20:50.804 read: IOPS=240, BW=961KiB/s (984kB/s)(9660KiB/10049msec) 00:20:50.804 slat (usec): min=3, max=2178, avg=18.67, stdev=44.93 00:20:50.804 clat (msec): min=13, max=128, avg=66.38, stdev=19.94 00:20:50.804 lat (msec): min=13, max=128, avg=66.40, stdev=19.94 00:20:50.804 clat percentiles (msec): 00:20:50.804 | 1.00th=[ 23], 5.00th=[ 30], 10.00th=[ 45], 20.00th=[ 50], 00:20:50.804 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 74], 00:20:50.804 | 70.00th=[ 79], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 101], 00:20:50.804 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:20:50.804 | 99.99th=[ 129] 00:20:50.804 bw ( KiB/s): min= 616, max= 1904, per=4.20%, avg=959.60, stdev=238.73, samples=20 00:20:50.804 iops : min= 154, max= 476, avg=239.90, stdev=59.68, samples=20 00:20:50.804 lat (msec) : 20=0.08%, 50=20.33%, 100=74.49%, 250=5.09% 00:20:50.804 cpu : usr=43.73%, sys=1.86%, ctx=1560, majf=0, minf=9 00:20:50.804 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:50.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 issued rwts: total=2415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.804 filename1: (groupid=0, jobs=1): err= 0: pid=83570: Wed Nov 20 08:33:36 2024 00:20:50.804 read: IOPS=243, BW=975KiB/s (998kB/s)(9752KiB/10004msec) 00:20:50.804 slat (usec): min=4, max=8045, avg=42.35, stdev=446.02 00:20:50.804 clat (msec): min=3, max=122, avg=65.43, stdev=19.31 00:20:50.804 lat (msec): min=3, max=128, avg=65.47, stdev=19.32 00:20:50.804 clat percentiles (msec): 00:20:50.804 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 48], 00:20:50.804 | 30.00th=[ 51], 40.00th=[ 60], 50.00th=[ 67], 60.00th=[ 72], 00:20:50.804 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 85], 95.00th=[ 96], 00:20:50.804 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:20:50.804 | 99.99th=[ 124] 00:20:50.804 bw ( KiB/s): min= 824, max= 1384, per=4.26%, avg=973.89, stdev=116.81, samples=19 00:20:50.804 iops : min= 206, max= 346, avg=243.47, stdev=29.20, samples=19 00:20:50.804 lat (msec) : 4=0.25%, 20=0.66%, 50=29.33%, 100=65.63%, 250=4.14% 00:20:50.804 cpu : usr=31.83%, 
sys=1.40%, ctx=857, majf=0, minf=9 00:20:50.804 IO depths : 1=0.1%, 2=0.3%, 4=1.5%, 8=82.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:50.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 issued rwts: total=2438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.804 filename1: (groupid=0, jobs=1): err= 0: pid=83571: Wed Nov 20 08:33:36 2024 00:20:50.804 read: IOPS=234, BW=939KiB/s (961kB/s)(9412KiB/10025msec) 00:20:50.804 slat (usec): min=3, max=12027, avg=33.73, stdev=378.56 00:20:50.804 clat (msec): min=28, max=128, avg=67.97, stdev=19.15 00:20:50.804 lat (msec): min=28, max=128, avg=68.00, stdev=19.15 00:20:50.804 clat percentiles (msec): 00:20:50.804 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 48], 00:20:50.804 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 73], 00:20:50.804 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 97], 00:20:50.804 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 129], 00:20:50.804 | 99.99th=[ 129] 00:20:50.804 bw ( KiB/s): min= 752, max= 1424, per=4.13%, avg=943.58, stdev=136.98, samples=19 00:20:50.804 iops : min= 188, max= 356, avg=235.89, stdev=34.24, samples=19 00:20:50.804 lat (msec) : 50=25.84%, 100=69.74%, 250=4.42% 00:20:50.804 cpu : usr=32.14%, sys=1.37%, ctx=861, majf=0, minf=9 00:20:50.804 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:50.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 issued rwts: total=2353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.804 filename1: (groupid=0, jobs=1): err= 0: pid=83572: Wed Nov 20 08:33:36 2024 00:20:50.804 read: IOPS=243, BW=974KiB/s (997kB/s)(9788KiB/10049msec) 00:20:50.804 slat (usec): min=3, max=8033, avg=29.28, stdev=293.16 00:20:50.804 clat (msec): min=14, max=134, avg=65.51, stdev=21.24 00:20:50.804 lat (msec): min=14, max=134, avg=65.54, stdev=21.24 00:20:50.804 clat percentiles (msec): 00:20:50.804 | 1.00th=[ 18], 5.00th=[ 28], 10.00th=[ 39], 20.00th=[ 48], 00:20:50.804 | 30.00th=[ 53], 40.00th=[ 60], 50.00th=[ 70], 60.00th=[ 72], 00:20:50.804 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 104], 00:20:50.804 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 132], 00:20:50.804 | 99.99th=[ 136] 00:20:50.804 bw ( KiB/s): min= 632, max= 2096, per=4.26%, avg=972.40, stdev=281.65, samples=20 00:20:50.804 iops : min= 158, max= 524, avg=243.10, stdev=70.41, samples=20 00:20:50.804 lat (msec) : 20=1.39%, 50=25.42%, 100=67.80%, 250=5.39% 00:20:50.804 cpu : usr=37.53%, sys=1.53%, ctx=1247, majf=0, minf=9 00:20:50.804 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:50.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 issued rwts: total=2447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.804 filename1: (groupid=0, jobs=1): err= 0: pid=83573: Wed Nov 20 08:33:36 2024 00:20:50.804 read: IOPS=241, BW=967KiB/s (991kB/s)(9688KiB/10014msec) 00:20:50.804 slat (usec): min=7, max=8027, avg=34.68, stdev=363.60 00:20:50.804 clat 
(msec): min=15, max=131, avg=66.02, stdev=19.17 00:20:50.804 lat (msec): min=15, max=131, avg=66.05, stdev=19.20 00:20:50.804 clat percentiles (msec): 00:20:50.804 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 48], 00:20:50.804 | 30.00th=[ 51], 40.00th=[ 60], 50.00th=[ 71], 60.00th=[ 72], 00:20:50.804 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 85], 95.00th=[ 97], 00:20:50.804 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:20:50.804 | 99.99th=[ 132] 00:20:50.804 bw ( KiB/s): min= 816, max= 1296, per=4.25%, avg=970.11, stdev=101.33, samples=19 00:20:50.804 iops : min= 204, max= 324, avg=242.53, stdev=25.33, samples=19 00:20:50.804 lat (msec) : 20=0.70%, 50=28.24%, 100=66.27%, 250=4.79% 00:20:50.804 cpu : usr=31.84%, sys=1.42%, ctx=852, majf=0, minf=9 00:20:50.804 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:50.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.804 issued rwts: total=2422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.804 filename1: (groupid=0, jobs=1): err= 0: pid=83574: Wed Nov 20 08:33:36 2024 00:20:50.804 read: IOPS=234, BW=937KiB/s (960kB/s)(9424KiB/10056msec) 00:20:50.804 slat (usec): min=4, max=8094, avg=35.03, stdev=295.69 00:20:50.804 clat (msec): min=12, max=132, avg=68.08, stdev=21.75 00:20:50.804 lat (msec): min=12, max=132, avg=68.12, stdev=21.76 00:20:50.804 clat percentiles (msec): 00:20:50.804 | 1.00th=[ 19], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 51], 00:20:50.804 | 30.00th=[ 58], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:20:50.804 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 108], 00:20:50.805 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 133], 99.95th=[ 133], 00:20:50.805 | 99.99th=[ 133] 00:20:50.805 bw ( KiB/s): min= 616, max= 2071, per=4.09%, avg=935.15, stdev=279.38, samples=20 00:20:50.805 iops : min= 154, max= 517, avg=233.75, stdev=69.68, samples=20 00:20:50.805 lat (msec) : 20=1.19%, 50=18.80%, 100=73.43%, 250=6.58% 00:20:50.805 cpu : usr=37.86%, sys=1.69%, ctx=1151, majf=0, minf=9 00:20:50.805 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:50.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 issued rwts: total=2356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.805 filename2: (groupid=0, jobs=1): err= 0: pid=83575: Wed Nov 20 08:33:36 2024 00:20:50.805 read: IOPS=233, BW=934KiB/s (956kB/s)(9364KiB/10026msec) 00:20:50.805 slat (usec): min=7, max=12037, avg=31.33, stdev=351.18 00:20:50.805 clat (msec): min=25, max=156, avg=68.30, stdev=19.60 00:20:50.805 lat (msec): min=25, max=156, avg=68.33, stdev=19.60 00:20:50.805 clat percentiles (msec): 00:20:50.805 | 1.00th=[ 30], 5.00th=[ 33], 10.00th=[ 47], 20.00th=[ 49], 00:20:50.805 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:20:50.805 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 105], 00:20:50.805 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 129], 99.95th=[ 129], 00:20:50.805 | 99.99th=[ 157] 00:20:50.805 bw ( KiB/s): min= 632, max= 1536, per=4.07%, avg=930.05, stdev=168.26, samples=20 00:20:50.805 iops : min= 158, max= 384, avg=232.50, stdev=42.06, samples=20 00:20:50.805 lat 
(msec) : 50=24.09%, 100=70.82%, 250=5.08% 00:20:50.805 cpu : usr=32.28%, sys=1.36%, ctx=877, majf=0, minf=9 00:20:50.805 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:50.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 issued rwts: total=2341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.805 filename2: (groupid=0, jobs=1): err= 0: pid=83576: Wed Nov 20 08:33:36 2024 00:20:50.805 read: IOPS=235, BW=940KiB/s (963kB/s)(9452KiB/10053msec) 00:20:50.805 slat (usec): min=5, max=8039, avg=31.55, stdev=273.06 00:20:50.805 clat (msec): min=12, max=155, avg=67.88, stdev=21.51 00:20:50.805 lat (msec): min=12, max=155, avg=67.91, stdev=21.51 00:20:50.805 clat percentiles (msec): 00:20:50.805 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 39], 20.00th=[ 50], 00:20:50.805 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 73], 00:20:50.805 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 108], 00:20:50.805 | 99.00th=[ 124], 99.50th=[ 129], 99.90th=[ 153], 99.95th=[ 155], 00:20:50.805 | 99.99th=[ 157] 00:20:50.805 bw ( KiB/s): min= 560, max= 2003, per=4.11%, avg=938.55, stdev=266.61, samples=20 00:20:50.805 iops : min= 140, max= 500, avg=234.60, stdev=66.49, samples=20 00:20:50.805 lat (msec) : 20=0.21%, 50=21.07%, 100=72.28%, 250=6.43% 00:20:50.805 cpu : usr=37.70%, sys=1.31%, ctx=1082, majf=0, minf=9 00:20:50.805 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:50.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 issued rwts: total=2363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.805 filename2: (groupid=0, jobs=1): err= 0: pid=83577: Wed Nov 20 08:33:36 2024 00:20:50.805 read: IOPS=242, BW=970KiB/s (993kB/s)(9740KiB/10044msec) 00:20:50.805 slat (usec): min=3, max=4028, avg=20.06, stdev=81.80 00:20:50.805 clat (msec): min=15, max=125, avg=65.81, stdev=19.81 00:20:50.805 lat (msec): min=15, max=125, avg=65.83, stdev=19.81 00:20:50.805 clat percentiles (msec): 00:20:50.805 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 44], 20.00th=[ 48], 00:20:50.805 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 73], 00:20:50.805 | 70.00th=[ 78], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 99], 00:20:50.805 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 126], 99.95th=[ 126], 00:20:50.805 | 99.99th=[ 126] 00:20:50.805 bw ( KiB/s): min= 664, max= 1792, per=4.25%, avg=970.00, stdev=208.87, samples=20 00:20:50.805 iops : min= 166, max= 448, avg=242.50, stdev=52.22, samples=20 00:20:50.805 lat (msec) : 20=0.74%, 50=23.94%, 100=70.51%, 250=4.80% 00:20:50.805 cpu : usr=42.87%, sys=1.78%, ctx=1220, majf=0, minf=9 00:20:50.805 IO depths : 1=0.1%, 2=0.7%, 4=2.5%, 8=81.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:50.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 issued rwts: total=2435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.805 filename2: (groupid=0, jobs=1): err= 0: pid=83578: Wed Nov 20 08:33:36 2024 00:20:50.805 read: IOPS=236, BW=946KiB/s (968kB/s)(9508KiB/10055msec) 
00:20:50.805 slat (usec): min=3, max=6029, avg=29.13, stdev=225.74 00:20:50.805 clat (msec): min=13, max=157, avg=67.49, stdev=21.17 00:20:50.805 lat (msec): min=13, max=157, avg=67.52, stdev=21.17 00:20:50.805 clat percentiles (msec): 00:20:50.805 | 1.00th=[ 20], 5.00th=[ 30], 10.00th=[ 41], 20.00th=[ 50], 00:20:50.805 | 30.00th=[ 55], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 74], 00:20:50.805 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 90], 95.00th=[ 106], 00:20:50.805 | 99.00th=[ 123], 99.50th=[ 126], 99.90th=[ 128], 99.95th=[ 140], 00:20:50.805 | 99.99th=[ 159] 00:20:50.805 bw ( KiB/s): min= 608, max= 1908, per=4.13%, avg=943.80, stdev=243.45, samples=20 00:20:50.805 iops : min= 152, max= 477, avg=235.95, stdev=60.86, samples=20 00:20:50.805 lat (msec) : 20=1.26%, 50=19.56%, 100=73.58%, 250=5.60% 00:20:50.805 cpu : usr=43.80%, sys=1.93%, ctx=1475, majf=0, minf=9 00:20:50.805 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=78.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:50.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 issued rwts: total=2377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.805 filename2: (groupid=0, jobs=1): err= 0: pid=83579: Wed Nov 20 08:33:36 2024 00:20:50.805 read: IOPS=242, BW=969KiB/s (993kB/s)(9720KiB/10028msec) 00:20:50.805 slat (usec): min=3, max=8034, avg=30.53, stdev=325.09 00:20:50.805 clat (msec): min=15, max=130, avg=65.85, stdev=19.47 00:20:50.805 lat (msec): min=15, max=130, avg=65.88, stdev=19.47 00:20:50.805 clat percentiles (msec): 00:20:50.805 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:20:50.805 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:20:50.805 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 85], 95.00th=[ 97], 00:20:50.805 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 131], 00:20:50.805 | 99.99th=[ 131] 00:20:50.805 bw ( KiB/s): min= 712, max= 1672, per=4.23%, avg=965.45, stdev=182.97, samples=20 00:20:50.805 iops : min= 178, max= 418, avg=241.35, stdev=45.74, samples=20 00:20:50.805 lat (msec) : 20=0.08%, 50=30.37%, 100=65.19%, 250=4.36% 00:20:50.805 cpu : usr=31.81%, sys=1.35%, ctx=857, majf=0, minf=9 00:20:50.805 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:50.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 issued rwts: total=2430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.805 filename2: (groupid=0, jobs=1): err= 0: pid=83580: Wed Nov 20 08:33:36 2024 00:20:50.805 read: IOPS=245, BW=980KiB/s (1004kB/s)(9820KiB/10019msec) 00:20:50.805 slat (usec): min=7, max=8054, avg=38.69, stdev=342.27 00:20:50.805 clat (msec): min=14, max=131, avg=65.10, stdev=19.37 00:20:50.805 lat (msec): min=14, max=131, avg=65.14, stdev=19.37 00:20:50.805 clat percentiles (msec): 00:20:50.805 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:20:50.805 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 68], 60.00th=[ 72], 00:20:50.805 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 96], 00:20:50.805 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 132], 00:20:50.805 | 99.99th=[ 132] 00:20:50.805 bw ( KiB/s): min= 816, max= 1672, per=4.33%, avg=988.74, stdev=177.08, samples=19 00:20:50.805 
iops : min= 204, max= 418, avg=247.16, stdev=44.27, samples=19 00:20:50.805 lat (msec) : 20=0.12%, 50=26.68%, 100=68.72%, 250=4.48% 00:20:50.805 cpu : usr=40.40%, sys=1.60%, ctx=1259, majf=0, minf=9 00:20:50.805 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:50.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 issued rwts: total=2455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.805 filename2: (groupid=0, jobs=1): err= 0: pid=83581: Wed Nov 20 08:33:36 2024 00:20:50.805 read: IOPS=236, BW=944KiB/s (967kB/s)(9464KiB/10022msec) 00:20:50.805 slat (usec): min=7, max=10035, avg=36.66, stdev=389.05 00:20:50.805 clat (msec): min=20, max=131, avg=67.57, stdev=19.85 00:20:50.805 lat (msec): min=20, max=131, avg=67.61, stdev=19.85 00:20:50.805 clat percentiles (msec): 00:20:50.805 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 49], 00:20:50.805 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:20:50.805 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 104], 00:20:50.805 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 132], 00:20:50.805 | 99.99th=[ 132] 00:20:50.805 bw ( KiB/s): min= 736, max= 1648, per=4.13%, avg=942.40, stdev=183.01, samples=20 00:20:50.805 iops : min= 184, max= 412, avg=235.60, stdev=45.75, samples=20 00:20:50.805 lat (msec) : 50=23.71%, 100=71.01%, 250=5.28% 00:20:50.805 cpu : usr=35.02%, sys=1.38%, ctx=996, majf=0, minf=9 00:20:50.805 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:50.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.805 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.805 filename2: (groupid=0, jobs=1): err= 0: pid=83582: Wed Nov 20 08:33:36 2024 00:20:50.806 read: IOPS=245, BW=981KiB/s (1004kB/s)(9856KiB/10052msec) 00:20:50.806 slat (usec): min=7, max=6462, avg=32.13, stdev=253.50 00:20:50.806 clat (msec): min=15, max=128, avg=65.07, stdev=20.08 00:20:50.806 lat (msec): min=15, max=128, avg=65.10, stdev=20.08 00:20:50.806 clat percentiles (msec): 00:20:50.806 | 1.00th=[ 23], 5.00th=[ 28], 10.00th=[ 41], 20.00th=[ 49], 00:20:50.806 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 68], 60.00th=[ 73], 00:20:50.806 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 99], 00:20:50.806 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 129], 00:20:50.806 | 99.99th=[ 129] 00:20:50.806 bw ( KiB/s): min= 672, max= 1912, per=4.28%, avg=978.10, stdev=233.39, samples=20 00:20:50.806 iops : min= 168, max= 478, avg=244.50, stdev=58.36, samples=20 00:20:50.806 lat (msec) : 20=0.08%, 50=23.66%, 100=71.67%, 250=4.59% 00:20:50.806 cpu : usr=42.01%, sys=1.88%, ctx=1489, majf=0, minf=9 00:20:50.806 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:50.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.806 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.806 issued rwts: total=2464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:50.806 00:20:50.806 Run status group 0 (all jobs): 00:20:50.806 READ: 
bw=22.3MiB/s (23.4MB/s), 898KiB/s-982KiB/s (920kB/s-1006kB/s), io=224MiB (235MB), run=10001-10062msec 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 bdev_null0 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 [2024-11-20 08:33:36.877484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:50.806 08:33:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 bdev_null1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:50.806 { 00:20:50.806 "params": { 00:20:50.806 "name": "Nvme$subsystem", 00:20:50.806 "trtype": "$TEST_TRANSPORT", 00:20:50.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.806 "adrfam": "ipv4", 00:20:50.806 "trsvcid": "$NVMF_PORT", 00:20:50.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.806 "hdgst": ${hdgst:-false}, 00:20:50.806 "ddgst": ${ddgst:-false} 00:20:50.806 }, 00:20:50.806 "method": "bdev_nvme_attach_controller" 00:20:50.806 } 00:20:50.806 EOF 00:20:50.806 )") 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:50.806 08:33:36 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local sanitizers 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # shift 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local asan_lib= 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # grep libasan 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:50.807 { 00:20:50.807 "params": { 00:20:50.807 "name": "Nvme$subsystem", 00:20:50.807 "trtype": "$TEST_TRANSPORT", 00:20:50.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:50.807 "adrfam": "ipv4", 00:20:50.807 "trsvcid": "$NVMF_PORT", 00:20:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:50.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:50.807 "hdgst": ${hdgst:-false}, 00:20:50.807 "ddgst": ${ddgst:-false} 00:20:50.807 }, 00:20:50.807 "method": "bdev_nvme_attach_controller" 00:20:50.807 } 00:20:50.807 EOF 00:20:50.807 )") 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
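The rpc_cmd calls traced above drive SPDK's JSON-RPC interface on the running nvmf_tgt. For reference, the same two-subsystem layout can be reproduced outside the harness with scripts/rpc.py; this is only a sketch with the arguments copied from the trace, and it assumes the target app is already up and the TCP transport has been created (nvmf_create_transport -t tcp), which the harness does during setup:

# Sketch: standalone equivalents of the rpc_cmd calls above
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# repeat with bdev_null1 / cnode1 / serial 53313233-1 for the second subsystem

The --md-size 16 --dif-type 1 pair is what gives these null bdevs per-block protection information, which is what the NULL_DIF=1 setting selects for this part of the test.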
00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:50.807 "params": { 00:20:50.807 "name": "Nvme0", 00:20:50.807 "trtype": "tcp", 00:20:50.807 "traddr": "10.0.0.3", 00:20:50.807 "adrfam": "ipv4", 00:20:50.807 "trsvcid": "4420", 00:20:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:50.807 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:50.807 "hdgst": false, 00:20:50.807 "ddgst": false 00:20:50.807 }, 00:20:50.807 "method": "bdev_nvme_attach_controller" 00:20:50.807 },{ 00:20:50.807 "params": { 00:20:50.807 "name": "Nvme1", 00:20:50.807 "trtype": "tcp", 00:20:50.807 "traddr": "10.0.0.3", 00:20:50.807 "adrfam": "ipv4", 00:20:50.807 "trsvcid": "4420", 00:20:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:50.807 "hdgst": false, 00:20:50.807 "ddgst": false 00:20:50.807 }, 00:20:50.807 "method": "bdev_nvme_attach_controller" 00:20:50.807 }' 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # asan_lib= 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # grep libclang_rt.asan 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # asan_lib= 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:50.807 08:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:50.807 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:50.807 ... 00:20:50.807 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:50.807 ... 
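The filename0/filename1 description lines above come from a small job file that dif.sh's gen_fio_conf emits and hands to fio as /dev/fd/61. Below is a rough reconstruction for this run; the option values follow the NULL_DIF=1 bs=8k,16k,128k numjobs=2 iodepth=8 runtime=5 parameters set earlier, the Nvme0n1/Nvme1n1 filenames are an assumption based on the Nvme0/Nvme1 controller names in the generated JSON, and the verbatim file may differ in detail:

[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

; one section per exported namespace; filename= names the attached SPDK bdev
[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

With numjobs=2 across the two sections, this accounts for the "Starting 4 threads" line that follows.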
00:20:50.807 fio-3.35 00:20:50.807 Starting 4 threads 00:20:56.078 00:20:56.078 filename0: (groupid=0, jobs=1): err= 0: pid=83719: Wed Nov 20 08:33:42 2024 00:20:56.078 read: IOPS=2278, BW=17.8MiB/s (18.7MB/s)(89.0MiB/5001msec) 00:20:56.078 slat (nsec): min=4807, max=63426, avg=10925.10, stdev=3753.14 00:20:56.078 clat (usec): min=639, max=10739, avg=3479.73, stdev=1203.59 00:20:56.078 lat (usec): min=647, max=10750, avg=3490.66, stdev=1203.76 00:20:56.078 clat percentiles (usec): 00:20:56.078 | 1.00th=[ 1369], 5.00th=[ 1385], 10.00th=[ 1401], 20.00th=[ 2900], 00:20:56.078 | 30.00th=[ 2966], 40.00th=[ 3294], 50.00th=[ 3818], 60.00th=[ 4015], 00:20:56.078 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4883], 95.00th=[ 4948], 00:20:56.078 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 7963], 99.95th=[ 8160], 00:20:56.078 | 99.99th=[10290] 00:20:56.078 bw ( KiB/s): min=12992, max=21008, per=27.47%, avg=17953.78, stdev=3300.91, samples=9 00:20:56.078 iops : min= 1624, max= 2626, avg=2244.22, stdev=412.61, samples=9 00:20:56.078 lat (usec) : 750=0.07%, 1000=0.11% 00:20:56.078 lat (msec) : 2=17.60%, 4=39.58%, 10=42.63%, 20=0.02% 00:20:56.078 cpu : usr=91.16%, sys=7.84%, ctx=4, majf=0, minf=0 00:20:56.078 IO depths : 1=0.1%, 2=3.9%, 4=62.9%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.078 complete : 0=0.0%, 4=98.5%, 8=1.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.078 issued rwts: total=11396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.078 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:56.078 filename0: (groupid=0, jobs=1): err= 0: pid=83720: Wed Nov 20 08:33:42 2024 00:20:56.078 read: IOPS=1959, BW=15.3MiB/s (16.1MB/s)(76.6MiB/5001msec) 00:20:56.078 slat (nsec): min=7046, max=45110, avg=15364.24, stdev=3273.03 00:20:56.078 clat (usec): min=1095, max=10488, avg=4031.24, stdev=722.73 00:20:56.078 lat (usec): min=1104, max=10502, avg=4046.61, stdev=722.66 00:20:56.078 clat percentiles (usec): 00:20:56.078 | 1.00th=[ 2008], 5.00th=[ 2671], 10.00th=[ 3228], 20.00th=[ 3752], 00:20:56.078 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:20:56.078 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4686], 95.00th=[ 4883], 00:20:56.078 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 8848], 99.95th=[10159], 00:20:56.078 | 99.99th=[10552] 00:20:56.078 bw ( KiB/s): min=14544, max=16864, per=24.07%, avg=15728.00, stdev=828.11, samples=9 00:20:56.078 iops : min= 1818, max= 2108, avg=1966.00, stdev=103.51, samples=9 00:20:56.078 lat (msec) : 2=0.93%, 4=26.16%, 10=72.86%, 20=0.05% 00:20:56.078 cpu : usr=92.70%, sys=6.48%, ctx=5, majf=0, minf=0 00:20:56.078 IO depths : 1=0.1%, 2=15.5%, 4=56.8%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.078 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.078 issued rwts: total=9799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.078 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:56.078 filename1: (groupid=0, jobs=1): err= 0: pid=83721: Wed Nov 20 08:33:42 2024 00:20:56.078 read: IOPS=1974, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5002msec) 00:20:56.078 slat (nsec): min=7331, max=44859, avg=14590.46, stdev=3749.24 00:20:56.078 clat (usec): min=1280, max=10466, avg=4004.06, stdev=739.45 00:20:56.078 lat (usec): min=1289, max=10480, avg=4018.65, stdev=739.68 00:20:56.078 clat percentiles (usec): 00:20:56.078 | 1.00th=[ 2040], 5.00th=[ 
2638], 10.00th=[ 2999], 20.00th=[ 3687], 00:20:56.078 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:20:56.078 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4686], 95.00th=[ 4883], 00:20:56.078 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 8717], 99.95th=[10290], 00:20:56.078 | 99.99th=[10421] 00:20:56.078 bw ( KiB/s): min=14528, max=18144, per=24.29%, avg=15875.56, stdev=1117.82, samples=9 00:20:56.078 iops : min= 1816, max= 2268, avg=1984.44, stdev=139.73, samples=9 00:20:56.078 lat (msec) : 2=0.79%, 4=27.68%, 10=71.48%, 20=0.05% 00:20:56.078 cpu : usr=92.28%, sys=6.90%, ctx=4, majf=0, minf=0 00:20:56.078 IO depths : 1=0.1%, 2=14.8%, 4=57.1%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.078 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.078 issued rwts: total=9874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.078 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:56.078 filename1: (groupid=0, jobs=1): err= 0: pid=83722: Wed Nov 20 08:33:42 2024 00:20:56.078 read: IOPS=1957, BW=15.3MiB/s (16.0MB/s)(76.5MiB/5001msec) 00:20:56.078 slat (nsec): min=7446, max=46648, avg=15497.31, stdev=3420.53 00:20:56.078 clat (usec): min=1281, max=10489, avg=4032.88, stdev=719.41 00:20:56.078 lat (usec): min=1294, max=10503, avg=4048.37, stdev=719.04 00:20:56.078 clat percentiles (usec): 00:20:56.078 | 1.00th=[ 2024], 5.00th=[ 2704], 10.00th=[ 3228], 20.00th=[ 3752], 00:20:56.078 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:20:56.078 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4686], 95.00th=[ 4883], 00:20:56.078 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 8848], 99.95th=[10159], 00:20:56.078 | 99.99th=[10552] 00:20:56.078 bw ( KiB/s): min=14544, max=16848, per=24.06%, avg=15724.22, stdev=822.33, samples=9 00:20:56.078 iops : min= 1818, max= 2106, avg=1965.44, stdev=102.67, samples=9 00:20:56.078 lat (msec) : 2=0.86%, 4=26.18%, 10=72.91%, 20=0.05% 00:20:56.078 cpu : usr=91.82%, sys=7.32%, ctx=6, majf=0, minf=1 00:20:56.078 IO depths : 1=0.1%, 2=15.4%, 4=56.8%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:56.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.078 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.078 issued rwts: total=9791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.078 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:56.078 00:20:56.078 Run status group 0 (all jobs): 00:20:56.078 READ: bw=63.8MiB/s (66.9MB/s), 15.3MiB/s-17.8MiB/s (16.0MB/s-18.7MB/s), io=319MiB (335MB), run=5001-5002msec 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.078 ************************************ 00:20:56.078 END TEST fio_dif_rand_params 00:20:56.078 ************************************ 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:56.078 00:20:56.078 real 0m23.774s 00:20:56.078 user 2m4.429s 00:20:56.078 sys 0m7.467s 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1133 -- # xtrace_disable 00:20:56.078 08:33:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.078 08:33:43 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:56.078 08:33:43 nvmf_dif -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:20:56.078 08:33:43 nvmf_dif -- common/autotest_common.sh@1114 -- # xtrace_disable 00:20:56.078 08:33:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:56.078 ************************************ 00:20:56.078 START TEST fio_dif_digest 00:20:56.078 ************************************ 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1132 -- # fio_dif_digest 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:56.078 bdev_null0 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@566 -- # xtrace_disable 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:56.078 [2024-11-20 08:33:43.122862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.078 { 00:20:56.078 "params": { 00:20:56.078 "name": "Nvme$subsystem", 00:20:56.078 "trtype": 
"$TEST_TRANSPORT", 00:20:56.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.078 "adrfam": "ipv4", 00:20:56.078 "trsvcid": "$NVMF_PORT", 00:20:56.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.078 "hdgst": ${hdgst:-false}, 00:20:56.078 "ddgst": ${ddgst:-false} 00:20:56.078 }, 00:20:56.078 "method": "bdev_nvme_attach_controller" 00:20:56.078 } 00:20:56.078 EOF 00:20:56.078 )") 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1329 -- # local fio_dir=/usr/src/fio 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1331 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1331 -- # local sanitizers 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1332 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # shift 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local asan_lib= 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # grep libasan 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:20:56.078 08:33:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:56.078 "params": { 00:20:56.078 "name": "Nvme0", 00:20:56.078 "trtype": "tcp", 00:20:56.078 "traddr": "10.0.0.3", 00:20:56.078 "adrfam": "ipv4", 00:20:56.078 "trsvcid": "4420", 00:20:56.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:56.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:56.078 "hdgst": true, 00:20:56.078 "ddgst": true 00:20:56.078 }, 00:20:56.078 "method": "bdev_nvme_attach_controller" 00:20:56.079 }' 00:20:56.079 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # asan_lib= 00:20:56.079 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:20:56.079 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # for sanitizer in "${sanitizers[@]}" 00:20:56.079 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:56.079 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # grep libclang_rt.asan 00:20:56.079 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # awk '{print $3}' 00:20:56.079 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # asan_lib= 00:20:56.079 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # [[ -n '' ]] 00:20:56.079 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:56.079 08:33:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:56.079 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:56.079 ... 
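Under the hood, the fio_bdev wrapper traced above is plain fio with SPDK's external bdev ioengine preloaded, the generated attach-controller JSON on /dev/fd/62 and the job file on /dev/fd/61. A standalone sketch of the same invocation, using the build-tree path from the trace and hypothetical file names (nvmf_subsys.json holding the JSON printed above with hdgst/ddgst set to true, dif_digest.fio holding the 128k random-read job):

# Sketch: run fio against the NVMe-oF target through the SPDK bdev plugin
SPDK_DIR=/home/vagrant/spdk_repo/spdk
LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" fio --ioengine=spdk_bdev \
    --spdk_json_conf=./nvmf_subsys.json ./dif_digest.fio

Note that the digest knobs live entirely in the JSON (the hdgst/ddgst booleans passed to bdev_nvme_attach_controller); the fio job itself stays an ordinary 128KiB random-read workload with numjobs=3 and iodepth=3, matching the "Starting 3 threads" line below.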
00:20:56.079 fio-3.35 00:20:56.079 Starting 3 threads 00:21:08.318 00:21:08.318 filename0: (groupid=0, jobs=1): err= 0: pid=83828: Wed Nov 20 08:33:53 2024 00:21:08.318 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(285MiB/10007msec) 00:21:08.318 slat (nsec): min=7426, max=51256, avg=15501.75, stdev=5385.98 00:21:08.318 clat (usec): min=7320, max=14255, avg=13133.17, stdev=298.06 00:21:08.318 lat (usec): min=7329, max=14268, avg=13148.67, stdev=298.30 00:21:08.318 clat percentiles (usec): 00:21:08.318 | 1.00th=[12387], 5.00th=[12780], 10.00th=[12911], 20.00th=[13042], 00:21:08.318 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:21:08.318 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13435], 00:21:08.318 | 99.00th=[13698], 99.50th=[13829], 99.90th=[14222], 99.95th=[14222], 00:21:08.318 | 99.99th=[14222] 00:21:08.318 bw ( KiB/s): min=28416, max=29952, per=33.32%, avg=29143.58, stdev=310.77, samples=19 00:21:08.318 iops : min= 222, max= 234, avg=227.68, stdev= 2.43, samples=19 00:21:08.318 lat (msec) : 10=0.13%, 20=99.87% 00:21:08.318 cpu : usr=91.10%, sys=8.38%, ctx=5, majf=0, minf=0 00:21:08.318 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.318 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.318 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:08.318 filename0: (groupid=0, jobs=1): err= 0: pid=83829: Wed Nov 20 08:33:53 2024 00:21:08.318 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(285MiB/10009msec) 00:21:08.318 slat (nsec): min=7638, max=52264, avg=16461.14, stdev=4870.43 00:21:08.319 clat (usec): min=9271, max=14242, avg=13133.28, stdev=252.39 00:21:08.319 lat (usec): min=9285, max=14262, avg=13149.74, stdev=252.60 00:21:08.319 clat percentiles (usec): 00:21:08.319 | 1.00th=[12387], 5.00th=[12780], 10.00th=[12911], 20.00th=[13042], 00:21:08.319 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:21:08.319 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13435], 00:21:08.319 | 99.00th=[13698], 99.50th=[13829], 99.90th=[14222], 99.95th=[14222], 00:21:08.319 | 99.99th=[14222] 00:21:08.319 bw ( KiB/s): min=28416, max=29952, per=33.32%, avg=29142.65, stdev=391.92, samples=20 00:21:08.319 iops : min= 222, max= 234, avg=227.65, stdev= 3.07, samples=20 00:21:08.319 lat (msec) : 10=0.13%, 20=99.87% 00:21:08.319 cpu : usr=90.89%, sys=8.59%, ctx=6, majf=0, minf=0 00:21:08.319 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.319 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.319 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:08.319 filename0: (groupid=0, jobs=1): err= 0: pid=83830: Wed Nov 20 08:33:53 2024 00:21:08.319 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(285MiB/10009msec) 00:21:08.319 slat (usec): min=7, max=112, avg=16.88, stdev= 5.68 00:21:08.319 clat (usec): min=9285, max=14235, avg=13130.48, stdev=251.89 00:21:08.319 lat (usec): min=9298, max=14259, avg=13147.36, stdev=252.05 00:21:08.319 clat percentiles (usec): 00:21:08.319 | 1.00th=[12387], 5.00th=[12780], 10.00th=[12911], 20.00th=[13042], 00:21:08.319 | 30.00th=[13042], 40.00th=[13042], 
50.00th=[13173], 60.00th=[13173], 00:21:08.319 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13435], 00:21:08.319 | 99.00th=[13698], 99.50th=[13829], 99.90th=[14222], 99.95th=[14222], 00:21:08.319 | 99.99th=[14222] 00:21:08.319 bw ( KiB/s): min=28416, max=29952, per=33.32%, avg=29145.45, stdev=386.62, samples=20 00:21:08.319 iops : min= 222, max= 234, avg=227.65, stdev= 3.07, samples=20 00:21:08.319 lat (msec) : 10=0.13%, 20=99.87% 00:21:08.319 cpu : usr=90.56%, sys=8.50%, ctx=110, majf=0, minf=0 00:21:08.319 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.319 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.319 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:08.319 00:21:08.319 Run status group 0 (all jobs): 00:21:08.319 READ: bw=85.4MiB/s (89.6MB/s), 28.5MiB/s-28.5MiB/s (29.9MB/s-29.9MB/s), io=855MiB (897MB), run=10007-10009msec 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@566 -- # xtrace_disable 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@566 -- # xtrace_disable 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:08.319 ************************************ 00:21:08.319 END TEST fio_dif_digest 00:21:08.319 ************************************ 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:21:08.319 00:21:08.319 real 0m11.109s 00:21:08.319 user 0m28.032s 00:21:08.319 sys 0m2.809s 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1133 -- # xtrace_disable 00:21:08.319 08:33:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:08.319 08:33:54 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:08.319 08:33:54 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:08.319 08:33:54 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:08.319 08:33:54 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:21:08.319 08:33:54 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.319 08:33:54 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:21:08.319 08:33:54 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.319 08:33:54 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.319 rmmod nvme_tcp 00:21:08.319 rmmod nvme_fabrics 00:21:08.319 rmmod nvme_keyring 00:21:08.319 08:33:54 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:08.319 08:33:54 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:21:08.319 08:33:54 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:21:08.319 08:33:54 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83082 ']' 00:21:08.319 08:33:54 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83082 00:21:08.319 08:33:54 nvmf_dif -- common/autotest_common.sh@957 -- # '[' -z 83082 ']' 00:21:08.319 08:33:54 nvmf_dif -- common/autotest_common.sh@961 -- # kill -0 83082 00:21:08.319 08:33:54 nvmf_dif -- common/autotest_common.sh@962 -- # uname 00:21:08.319 08:33:54 nvmf_dif -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:21:08.319 08:33:54 nvmf_dif -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 83082 00:21:08.319 killing process with pid 83082 00:21:08.319 08:33:54 nvmf_dif -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:21:08.319 08:33:54 nvmf_dif -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:21:08.319 08:33:54 nvmf_dif -- common/autotest_common.sh@975 -- # echo 'killing process with pid 83082' 00:21:08.319 08:33:54 nvmf_dif -- common/autotest_common.sh@976 -- # kill 83082 00:21:08.319 08:33:54 nvmf_dif -- common/autotest_common.sh@981 -- # wait 83082 00:21:08.319 08:33:54 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:08.319 08:33:54 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:08.319 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:08.319 Waiting for block devices as requested 00:21:08.319 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:08.319 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:08.319 08:33:55 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.319 08:33:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:08.319 08:33:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.319 08:33:55 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:21:08.319 00:21:08.319 real 1m0.240s 00:21:08.319 user 3m48.851s 00:21:08.319 sys 0m19.281s 00:21:08.319 08:33:55 nvmf_dif -- common/autotest_common.sh@1133 -- # xtrace_disable 00:21:08.319 ************************************ 00:21:08.319 END TEST nvmf_dif 00:21:08.319 ************************************ 00:21:08.320 08:33:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:08.320 08:33:55 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:08.320 08:33:55 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:21:08.320 08:33:55 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:21:08.320 08:33:55 -- common/autotest_common.sh@10 -- # set +x 00:21:08.320 ************************************ 00:21:08.320 START TEST nvmf_abort_qd_sizes 00:21:08.320 ************************************ 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:08.320 * Looking for test storage... 00:21:08.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1638 -- # lcov --version 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:21:08.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.320 --rc genhtml_branch_coverage=1 00:21:08.320 --rc genhtml_function_coverage=1 00:21:08.320 --rc genhtml_legend=1 00:21:08.320 --rc geninfo_all_blocks=1 00:21:08.320 --rc geninfo_unexecuted_blocks=1 00:21:08.320 00:21:08.320 ' 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:21:08.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.320 --rc genhtml_branch_coverage=1 00:21:08.320 --rc genhtml_function_coverage=1 00:21:08.320 --rc genhtml_legend=1 00:21:08.320 --rc geninfo_all_blocks=1 00:21:08.320 --rc geninfo_unexecuted_blocks=1 00:21:08.320 00:21:08.320 ' 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:21:08.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.320 --rc genhtml_branch_coverage=1 00:21:08.320 --rc genhtml_function_coverage=1 00:21:08.320 --rc genhtml_legend=1 00:21:08.320 --rc geninfo_all_blocks=1 00:21:08.320 --rc geninfo_unexecuted_blocks=1 00:21:08.320 00:21:08.320 ' 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:21:08.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.320 --rc genhtml_branch_coverage=1 00:21:08.320 --rc genhtml_function_coverage=1 00:21:08.320 --rc genhtml_legend=1 00:21:08.320 --rc geninfo_all_blocks=1 00:21:08.320 --rc geninfo_unexecuted_blocks=1 00:21:08.320 00:21:08.320 ' 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # 
export PATH 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:08.320 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:08.321 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@150 
-- # NVMF_BRIDGE=nvmf_br 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:08.321 Cannot find device "nvmf_init_br" 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:08.321 Cannot find device "nvmf_init_br2" 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:08.321 Cannot find device "nvmf_tgt_br" 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:08.321 Cannot find device "nvmf_tgt_br2" 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:08.321 Cannot find device "nvmf_init_br" 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:08.321 Cannot find device "nvmf_init_br2" 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:08.321 Cannot find device "nvmf_tgt_br" 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:08.321 Cannot find device "nvmf_tgt_br2" 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:08.321 Cannot find device "nvmf_br" 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:08.321 Cannot find device "nvmf_init_if" 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:21:08.321 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:08.581 Cannot find device "nvmf_init_if2" 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:21:08.581 08:33:55 
nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:08.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:08.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:08.581 08:33:55 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # 
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:08.581 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:08.581 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:21:08.581 00:21:08.581 --- 10.0.0.3 ping statistics --- 00:21:08.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.581 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:08.581 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:08.581 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:21:08.581 00:21:08.581 --- 10.0.0.4 ping statistics --- 00:21:08.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.581 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:08.581 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:08.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:21:08.840 00:21:08.840 --- 10.0.0.1 ping statistics --- 00:21:08.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.840 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:21:08.840 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:08.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:08.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:21:08.841 00:21:08.841 --- 10.0.0.2 ping statistics --- 00:21:08.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.841 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:08.841 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.841 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:21:08.841 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:08.841 08:33:56 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:09.408 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:09.408 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:09.408 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:09.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84482 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84482 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # '[' -z 84482 ']' 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@843 -- # local max_retries=100 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@847 -- # xtrace_disable 00:21:09.666 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:09.666 [2024-11-20 08:33:57.094932] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
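For reference, the nvmf_veth_init sequence traced above reduces to the following standalone sketch (interface names, addresses, and the nvmf_tgt_ns_spdk namespace are taken from the trace; this is a condensed approximation of the helper, not its exact code):

  # Build the veth/bridge test network used by the nvmf TCP tests (run as root).
  ip netns add nvmf_tgt_ns_spdk

  # Four veth pairs: the *_if ends carry addresses, the *_br ends join the bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing from the trace: initiators 10.0.0.1/.2, targets 10.0.0.3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring the links up and bridge the *_br ends together.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Admit NVMe/TCP traffic on port 4420 and allow forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Connectivity check, as in the trace.
  ping -c 1 10.0.0.3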
00:21:09.666 [2024-11-20 08:33:57.095216] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.924 [2024-11-20 08:33:57.252919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:09.924 [2024-11-20 08:33:57.319652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.924 [2024-11-20 08:33:57.320143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.924 [2024-11-20 08:33:57.320449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.924 [2024-11-20 08:33:57.320768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.924 [2024-11-20 08:33:57.321005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.924 [2024-11-20 08:33:57.322547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.924 [2024-11-20 08:33:57.322702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.924 [2024-11-20 08:33:57.322779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:09.924 [2024-11-20 08:33:57.322780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.924 [2024-11-20 08:33:57.382656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:09.924 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:21:09.924 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@871 -- # return 0 00:21:09.924 08:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.924 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@735 -- # xtrace_disable 00:21:09.924 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:10.183 08:33:57 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:10.183 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:10.184 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:10.184 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:10.184 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:10.184 08:33:57 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:10.184 08:33:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
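The nvme_in_userspace step traced above selects NVMe functions by PCI class code (class 01, subclass 08, prog-if 02) and keeps only those not currently claimed by the kernel nvme driver, i.e. the ones available to SPDK in userspace. Roughly, it amounts to the sketch below, built from the traced commands (the PCI allow/block-list filtering of pci_can_use is omitted):

  # List NVMe PCI functions that are not bound to the kernel nvme driver.
  for bdf in $(lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
      # Skip functions still claimed by the kernel driver; print the rest.
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue
      echo "$bdf"
  done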
00:21:10.184 08:33:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:10.184 08:33:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:10.184 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:21:10.184 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1114 -- # xtrace_disable 00:21:10.184 08:33:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:10.184 ************************************ 00:21:10.184 START TEST spdk_target_abort 00:21:10.184 ************************************ 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1132 -- # spdk_target 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@566 -- # xtrace_disable 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:10.184 spdk_targetn1 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@566 -- # xtrace_disable 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:10.184 [2024-11-20 08:33:57.615654] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@566 -- # xtrace_disable 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@566 -- # xtrace_disable 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@566 -- # xtrace_disable 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:10.184 [2024-11-20 08:33:57.652827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:10.184 08:33:57 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:10.184 08:33:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:13.470 Initializing NVMe Controllers 00:21:13.470 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:13.470 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:13.470 Initialization complete. Launching workers. 
00:21:13.470 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10962, failed: 0 00:21:13.470 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1040, failed to submit 9922 00:21:13.470 success 760, unsuccessful 280, failed 0 00:21:13.470 08:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:13.470 08:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:16.819 Initializing NVMe Controllers 00:21:16.819 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:16.819 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:16.819 Initialization complete. Launching workers. 00:21:16.819 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8952, failed: 0 00:21:16.819 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1158, failed to submit 7794 00:21:16.819 success 411, unsuccessful 747, failed 0 00:21:16.819 08:34:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:16.820 08:34:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:20.107 Initializing NVMe Controllers 00:21:20.107 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:20.107 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:20.107 Initialization complete. Launching workers. 
00:21:20.107 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31750, failed: 0 00:21:20.108 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2311, failed to submit 29439 00:21:20.108 success 406, unsuccessful 1905, failed 0 00:21:20.108 08:34:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:20.108 08:34:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@566 -- # xtrace_disable 00:21:20.108 08:34:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:20.108 08:34:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:21:20.108 08:34:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:20.108 08:34:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@566 -- # xtrace_disable 00:21:20.108 08:34:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:20.675 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:21:20.675 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84482 00:21:20.675 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' -z 84482 ']' 00:21:20.676 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@961 -- # kill -0 84482 00:21:20.676 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # uname 00:21:20.676 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:21:20.676 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 84482 00:21:20.676 killing process with pid 84482 00:21:20.676 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:21:20.676 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:21:20.676 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@975 -- # echo 'killing process with pid 84482' 00:21:20.676 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # kill 84482 00:21:20.676 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@981 -- # wait 84482 00:21:20.934 00:21:20.934 real 0m10.841s 00:21:20.934 user 0m41.484s 00:21:20.934 sys 0m2.122s 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1133 -- # xtrace_disable 00:21:20.934 ************************************ 00:21:20.934 END TEST spdk_target_abort 00:21:20.934 ************************************ 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:20.934 08:34:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:20.934 08:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:21:20.934 08:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1114 -- # xtrace_disable 00:21:20.934 08:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:20.934 ************************************ 00:21:20.934 START TEST kernel_target_abort 00:21:20.934 
************************************ 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1132 -- # kernel_target 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:20.934 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:20.935 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:20.935 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:20.935 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:21:20.935 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:20.935 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:20.935 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:20.935 08:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:21.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:21.517 Waiting for block devices as requested 00:21:21.517 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:21.517 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:21.517 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:21.517 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:21.517 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:21.517 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1595 -- # local device=nvme0n1 00:21:21.517 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:21.517 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:21:21.517 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:21.517 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:21.517 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:21.783 No valid GPT data, bailing 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1595 -- # local device=nvme0n2 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:21.783 No valid GPT data, bailing 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1595 -- # local device=nvme0n3 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:21.783 No valid GPT data, bailing 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1595 -- # local device=nvme1n1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1597 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1598 -- # [[ none != none ]] 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:21.783 No valid GPT data, bailing 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:21.783 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:21.784 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:21.784 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:21:21.784 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:21.784 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:21:21.784 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:21:21.784 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:21:21.784 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 --hostid=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 -a 10.0.0.1 -t tcp -s 4420 00:21:22.043 00:21:22.043 Discovery Log Number of Records 2, Generation counter 2 00:21:22.043 =====Discovery Log Entry 0====== 00:21:22.043 trtype: tcp 00:21:22.043 adrfam: ipv4 00:21:22.043 subtype: current discovery subsystem 00:21:22.043 treq: not specified, sq flow control disable supported 00:21:22.043 portid: 1 00:21:22.043 trsvcid: 4420 00:21:22.043 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:22.043 traddr: 10.0.0.1 00:21:22.043 eflags: none 00:21:22.043 sectype: none 00:21:22.043 =====Discovery Log Entry 1====== 00:21:22.043 trtype: tcp 00:21:22.043 adrfam: ipv4 00:21:22.043 subtype: nvme subsystem 00:21:22.043 treq: not specified, sq flow control disable supported 00:21:22.043 portid: 1 00:21:22.043 trsvcid: 4420 00:21:22.043 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:22.043 traddr: 10.0.0.1 00:21:22.043 eflags: none 00:21:22.043 sectype: none 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:22.043 08:34:09 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:22.043 08:34:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:25.329 Initializing NVMe Controllers 00:21:25.329 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:25.330 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:25.330 Initialization complete. Launching workers. 00:21:25.330 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34791, failed: 0 00:21:25.330 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34791, failed to submit 0 00:21:25.330 success 0, unsuccessful 34791, failed 0 00:21:25.330 08:34:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:25.330 08:34:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:28.635 Initializing NVMe Controllers 00:21:28.635 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:28.635 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:28.635 Initialization complete. Launching workers. 
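The kernel_target_abort setup traced above builds an in-kernel NVMe-oF/TCP target entirely through configfs (nvmf/common.sh@686-705): one subsystem, one namespace backed by /dev/nvme1n1, and one TCP port on 10.0.0.1:4420, linked together before nvme discover confirms both discovery-log entries. A condensed sketch of that sequence follows; xtrace does not show redirection targets, so the attribute file names here are the standard kernel nvmet ones and are assumed rather than read from the trace.

    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo "SPDK-$nqn"  > "$sub/attr_model"                # assumed target of the 'echo SPDK-...' above
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
    nvme discover -a 10.0.0.1 -t tcp -s 4420             # hostnqn/hostid passed exactly as in the trace

The rabort helper then drives build/examples/abort against that target at queue depths 4, 24 and 64, which is what the surrounding runs exercise.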
00:21:28.635 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70893, failed: 0 00:21:28.635 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31341, failed to submit 39552 00:21:28.635 success 0, unsuccessful 31341, failed 0 00:21:28.635 08:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:28.635 08:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:31.946 Initializing NVMe Controllers 00:21:31.946 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:31.946 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:31.946 Initialization complete. Launching workers. 00:21:31.946 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84855, failed: 0 00:21:31.946 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21224, failed to submit 63631 00:21:31.946 success 0, unsuccessful 21224, failed 0 00:21:31.946 08:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:31.946 08:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:31.946 08:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:21:31.946 08:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:31.946 08:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:31.946 08:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:31.946 08:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:31.946 08:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:31.946 08:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:31.946 08:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:32.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:37.473 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:37.473 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:37.473 ************************************ 00:21:37.473 END TEST kernel_target_abort 00:21:37.473 ************************************ 00:21:37.473 00:21:37.473 real 0m15.700s 00:21:37.473 user 0m6.375s 00:21:37.473 sys 0m6.774s 00:21:37.473 08:34:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1133 -- # xtrace_disable 00:21:37.473 08:34:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:37.473 
08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:37.473 rmmod nvme_tcp 00:21:37.473 rmmod nvme_fabrics 00:21:37.473 rmmod nvme_keyring 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84482 ']' 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84482 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@957 -- # '[' -z 84482 ']' 00:21:37.473 Process with pid 84482 is not found 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@961 -- # kill -0 84482 00:21:37.473 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 961: kill: (84482) - No such process 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@984 -- # echo 'Process with pid 84482 is not found' 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:37.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:37.473 Waiting for block devices as requested 00:21:37.473 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:37.473 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:37.473 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:37.473 08:34:24 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:37.474 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:37.474 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:37.474 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:37.474 08:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:37.474 08:34:25 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:37.474 08:34:25 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.474 08:34:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:37.474 08:34:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.732 08:34:25 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:37.732 00:21:37.732 real 0m29.566s 00:21:37.732 user 0m49.018s 00:21:37.732 sys 0m10.352s 00:21:37.732 08:34:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1133 -- # xtrace_disable 00:21:37.732 08:34:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:37.732 ************************************ 00:21:37.732 END TEST nvmf_abort_qd_sizes 00:21:37.732 ************************************ 00:21:37.732 08:34:25 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:37.732 08:34:25 -- common/autotest_common.sh@1108 -- # '[' 2 -le 1 ']' 00:21:37.732 08:34:25 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:21:37.732 08:34:25 -- common/autotest_common.sh@10 -- # set +x 00:21:37.732 ************************************ 00:21:37.732 START TEST keyring_file 00:21:37.732 ************************************ 00:21:37.732 08:34:25 keyring_file -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:37.732 * Looking for test storage... 
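clean_kernel_target (nvmf/common.sh@712-723, traced a short way above) unwinds that configfs tree in reverse: disable the namespace, unlink the port from the subsystem, remove the namespace, port and subsystem directories, and only then unload the nvmet modules; nvmftestfini afterwards removes the host-side nvme-tcp modules and deletes the veth/bridge topology seen in the ip link commands. A sketch of the configfs part, with the one hidden redirection target again assumed:

    nqn=nqn.2016-06.io.spdk:testnqn
    echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # assumed target of the 'echo 0' above
    rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
    rmdir  /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
    rmdir  /sys/kernel/config/nvmet/ports/1
    rmdir  /sys/kernel/config/nvmet/subsystems/$nqn
    modprobe -r nvmet_tcp nvmet                                             # safe once nothing holds the modules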
00:21:37.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:37.732 08:34:25 keyring_file -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:21:37.732 08:34:25 keyring_file -- common/autotest_common.sh@1638 -- # lcov --version 00:21:37.732 08:34:25 keyring_file -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:21:37.991 08:34:25 keyring_file -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.991 08:34:25 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:37.991 08:34:25 keyring_file -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.991 08:34:25 keyring_file -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:21:37.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.991 --rc genhtml_branch_coverage=1 00:21:37.991 --rc genhtml_function_coverage=1 00:21:37.991 --rc genhtml_legend=1 00:21:37.991 --rc geninfo_all_blocks=1 00:21:37.991 --rc geninfo_unexecuted_blocks=1 00:21:37.991 00:21:37.991 ' 00:21:37.991 08:34:25 keyring_file -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:21:37.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.991 --rc genhtml_branch_coverage=1 00:21:37.991 --rc genhtml_function_coverage=1 00:21:37.991 --rc genhtml_legend=1 00:21:37.991 --rc geninfo_all_blocks=1 00:21:37.991 --rc 
geninfo_unexecuted_blocks=1 00:21:37.991 00:21:37.991 ' 00:21:37.991 08:34:25 keyring_file -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:21:37.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.991 --rc genhtml_branch_coverage=1 00:21:37.991 --rc genhtml_function_coverage=1 00:21:37.991 --rc genhtml_legend=1 00:21:37.991 --rc geninfo_all_blocks=1 00:21:37.991 --rc geninfo_unexecuted_blocks=1 00:21:37.991 00:21:37.991 ' 00:21:37.991 08:34:25 keyring_file -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:21:37.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.992 --rc genhtml_branch_coverage=1 00:21:37.992 --rc genhtml_function_coverage=1 00:21:37.992 --rc genhtml_legend=1 00:21:37.992 --rc geninfo_all_blocks=1 00:21:37.992 --rc geninfo_unexecuted_blocks=1 00:21:37.992 00:21:37.992 ' 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:37.992 08:34:25 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:37.992 08:34:25 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.992 08:34:25 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.992 08:34:25 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.992 08:34:25 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.992 08:34:25 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.992 08:34:25 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.992 08:34:25 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:37.992 08:34:25 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:37.992 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@14 -- # 
hostnqn=nqn.2016-06.io.spdk:host0 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0kHUWevv3x 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0kHUWevv3x 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0kHUWevv3x 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.0kHUWevv3x 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zXAeWLPciK 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:37.992 08:34:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zXAeWLPciK 00:21:37.992 08:34:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zXAeWLPciK 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.zXAeWLPciK 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@30 -- # tgtpid=85400 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.992 08:34:25 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85400 00:21:37.992 08:34:25 keyring_file -- common/autotest_common.sh@838 -- # '[' -z 85400 ']' 00:21:37.992 08:34:25 keyring_file -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.992 08:34:25 keyring_file -- common/autotest_common.sh@843 -- # local max_retries=100 00:21:37.992 08:34:25 keyring_file -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.992 08:34:25 keyring_file -- common/autotest_common.sh@847 -- # xtrace_disable 00:21:37.992 08:34:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:37.992 [2024-11-20 08:34:25.533882] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:21:37.992 [2024-11-20 08:34:25.533982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85400 ] 00:21:38.251 [2024-11-20 08:34:25.683361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.251 [2024-11-20 08:34:25.748389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.510 [2024-11-20 08:34:25.822901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@871 -- # return 0 00:21:39.080 08:34:26 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@566 -- # xtrace_disable 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:39.080 [2024-11-20 08:34:26.512680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.080 null0 00:21:39.080 [2024-11-20 08:34:26.544642] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.080 [2024-11-20 08:34:26.544798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:21:39.080 08:34:26 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@655 -- # local es=0 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@657 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@643 -- # local arg=rpc_cmd 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@647 -- # type -t rpc_cmd 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@658 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@566 -- # xtrace_disable 
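prep_key (keyring/common.sh@15-23, traced above) wraps each raw hex key in the NVMe TLS PSK interchange format and stores it in a 0600 temp file so it can later be loaded with keyring_file_add_key. The body of the python helper is not echoed by xtrace; the following is only a plausible equivalent, assuming the usual interchange layout (prefix, two-hex-digit hash identifier, base64 of the PSK bytes followed by their little-endian CRC32, trailing colon) and assuming the hex string is decoded to raw bytes rather than used verbatim:

    key_hex=00112233445566778899aabbccddeeff
    path=$(mktemp)                               # e.g. /tmp/tmp.0kHUWevv3x in the trace
    python3 -c '
    import base64, sys, zlib
    psk = bytes.fromhex(sys.argv[1])             # assumption: hex string decoded to the PSK bytes
    crc = zlib.crc32(psk).to_bytes(4, "little")  # CRC32 appended per the interchange layout
    print("NVMeTLSkey-1:00:" + base64.b64encode(psk + crc).decode() + ":")
    ' "$key_hex" > "$path"
    chmod 0600 "$path"                           # keyring_file_add_key rejects looser modes, as shown further down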
00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:39.080 [2024-11-20 08:34:26.576640] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:39.080 request: 00:21:39.080 { 00:21:39.080 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:39.080 "secure_channel": false, 00:21:39.080 "listen_address": { 00:21:39.080 "trtype": "tcp", 00:21:39.080 "traddr": "127.0.0.1", 00:21:39.080 "trsvcid": "4420" 00:21:39.080 }, 00:21:39.080 "method": "nvmf_subsystem_add_listener", 00:21:39.080 "req_id": 1 00:21:39.080 } 00:21:39.080 Got JSON-RPC error response 00:21:39.080 response: 00:21:39.080 { 00:21:39.080 "code": -32602, 00:21:39.080 "message": "Invalid parameters" 00:21:39.080 } 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@594 -- # [[ 1 == 0 ]] 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@658 -- # es=1 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:21:39.080 08:34:26 keyring_file -- keyring/file.sh@47 -- # bperfpid=85416 00:21:39.080 08:34:26 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:39.080 08:34:26 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85416 /var/tmp/bperf.sock 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@838 -- # '[' -z 85416 ']' 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@843 -- # local max_retries=100 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:39.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@847 -- # xtrace_disable 00:21:39.080 08:34:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:39.080 [2024-11-20 08:34:26.635967] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
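The bdevperf instance launched above is started idle (-z) with its JSON-RPC socket at /var/tmp/bperf.sock (-r), so everything that follows is driven over that socket: the two key files are registered with keyring_file_add_key, a controller is attached with --psk key0, and only then is I/O kicked off through the bdevperf.py helper. Condensed from the trace entries that follow, with paths and names exactly as logged:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.0kHUWevv3x
    "$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.zXAeWLPciK
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests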
00:21:39.081 [2024-11-20 08:34:26.636235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85416 ] 00:21:39.340 [2024-11-20 08:34:26.784408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.340 [2024-11-20 08:34:26.882102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.600 [2024-11-20 08:34:26.944416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:40.168 08:34:27 keyring_file -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:21:40.168 08:34:27 keyring_file -- common/autotest_common.sh@871 -- # return 0 00:21:40.168 08:34:27 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0kHUWevv3x 00:21:40.168 08:34:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0kHUWevv3x 00:21:40.426 08:34:27 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.zXAeWLPciK 00:21:40.426 08:34:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.zXAeWLPciK 00:21:40.994 08:34:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:40.994 08:34:28 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:40.994 08:34:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:40.994 08:34:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:40.994 08:34:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:41.253 08:34:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0kHUWevv3x == \/\t\m\p\/\t\m\p\.\0\k\H\U\W\e\v\v\3\x ]] 00:21:41.253 08:34:28 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:41.253 08:34:28 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:41.253 08:34:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:41.253 08:34:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:41.253 08:34:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:41.512 08:34:28 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.zXAeWLPciK == \/\t\m\p\/\t\m\p\.\z\X\A\e\W\L\P\c\i\K ]] 00:21:41.512 08:34:28 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:41.512 08:34:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:41.512 08:34:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:41.512 08:34:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:41.512 08:34:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:41.512 08:34:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:41.779 08:34:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:41.779 08:34:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:41.779 08:34:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:41.779 08:34:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:41.779 08:34:29 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:41.779 08:34:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:41.779 08:34:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:42.039 08:34:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:42.039 08:34:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:42.039 08:34:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:42.297 [2024-11-20 08:34:29.770861] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.297 nvme0n1 00:21:42.297 08:34:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:42.556 08:34:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:42.556 08:34:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:42.556 08:34:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:42.556 08:34:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:42.556 08:34:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:42.837 08:34:30 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:42.837 08:34:30 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:42.837 08:34:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:42.837 08:34:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:42.837 08:34:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:42.837 08:34:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:42.837 08:34:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:43.099 08:34:30 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:43.099 08:34:30 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:43.099 Running I/O for 1 seconds... 
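The get_refcnt checks around the attach are just keyring_get_keys piped through jq (keyring/common.sh@10-12): once bdev_nvme_attach_controller succeeds, key0 is expected to show two references (the keyring's own plus, presumably, one taken for the controller's TLS session) while key1 stays at one, which is what the (( 2 == 2 )) and (( 1 == 1 )) assertions above encode. Folded into a single pipeline, the check is roughly:

    refcnt=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0") | .refcnt')
    (( refcnt == 2 ))    # keyring/file.sh@60 makes the same assertion once nvme0n1 is attached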
00:21:44.031 11909.00 IOPS, 46.52 MiB/s 00:21:44.031 Latency(us) 00:21:44.031 [2024-11-20T08:34:31.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.031 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:44.031 nvme0n1 : 1.01 11961.94 46.73 0.00 0.00 10672.27 4170.47 22043.93 00:21:44.031 [2024-11-20T08:34:31.592Z] =================================================================================================================== 00:21:44.031 [2024-11-20T08:34:31.592Z] Total : 11961.94 46.73 0.00 0.00 10672.27 4170.47 22043.93 00:21:44.031 { 00:21:44.031 "results": [ 00:21:44.031 { 00:21:44.031 "job": "nvme0n1", 00:21:44.031 "core_mask": "0x2", 00:21:44.031 "workload": "randrw", 00:21:44.031 "percentage": 50, 00:21:44.031 "status": "finished", 00:21:44.031 "queue_depth": 128, 00:21:44.031 "io_size": 4096, 00:21:44.031 "runtime": 1.006442, 00:21:44.031 "iops": 11961.941174950965, 00:21:44.031 "mibps": 46.72633271465221, 00:21:44.031 "io_failed": 0, 00:21:44.031 "io_timeout": 0, 00:21:44.031 "avg_latency_us": 10672.26658088485, 00:21:44.031 "min_latency_us": 4170.472727272727, 00:21:44.031 "max_latency_us": 22043.927272727273 00:21:44.031 } 00:21:44.031 ], 00:21:44.031 "core_count": 1 00:21:44.031 } 00:21:44.031 08:34:31 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:44.031 08:34:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:44.290 08:34:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:44.290 08:34:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:44.290 08:34:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:44.290 08:34:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:44.290 08:34:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:44.290 08:34:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:44.857 08:34:32 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:44.857 08:34:32 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:44.857 08:34:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:44.857 08:34:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:44.857 08:34:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:44.857 08:34:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:44.857 08:34:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:44.857 08:34:32 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:44.857 08:34:32 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:44.857 08:34:32 keyring_file -- common/autotest_common.sh@655 -- # local es=0 00:21:44.857 08:34:32 keyring_file -- common/autotest_common.sh@657 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:44.857 08:34:32 keyring_file -- common/autotest_common.sh@643 -- # local arg=bperf_cmd 00:21:44.857 08:34:32 keyring_file -- 
common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:21:44.857 08:34:32 keyring_file -- common/autotest_common.sh@647 -- # type -t bperf_cmd 00:21:44.857 08:34:32 keyring_file -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:21:44.857 08:34:32 keyring_file -- common/autotest_common.sh@658 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:44.857 08:34:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:45.424 [2024-11-20 08:34:32.680461] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:45.424 [2024-11-20 08:34:32.680958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb6770 (107): Transport endpoint is not connected 00:21:45.424 [2024-11-20 08:34:32.681949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb6770 (9): Bad file descriptor 00:21:45.424 [2024-11-20 08:34:32.682945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:45.424 [2024-11-20 08:34:32.682983] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:45.424 [2024-11-20 08:34:32.682995] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:45.424 [2024-11-20 08:34:32.683007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
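The connection errors above are the expected outcome of the wrong-key check at keyring/file.sh@70: the target side was configured earlier in the test to accept the PSK held in key0, so the attach with key1 presumably cannot complete the TLS handshake, the socket is torn down (hence the 'Transport endpoint is not connected' and 'Bad file descriptor' flush errors), and the RPC returns -5 (Input/output error), as the response recorded just below shows. The NOT wrapper only inverts the exit status; stripped of it, the assertion amounts to:

    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
        echo "attach with the wrong PSK unexpectedly succeeded" >&2
        exit 1
    fi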
00:21:45.424 request: 00:21:45.424 { 00:21:45.424 "name": "nvme0", 00:21:45.424 "trtype": "tcp", 00:21:45.424 "traddr": "127.0.0.1", 00:21:45.424 "adrfam": "ipv4", 00:21:45.424 "trsvcid": "4420", 00:21:45.424 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:45.424 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:45.424 "prchk_reftag": false, 00:21:45.424 "prchk_guard": false, 00:21:45.424 "hdgst": false, 00:21:45.424 "ddgst": false, 00:21:45.424 "psk": "key1", 00:21:45.424 "allow_unrecognized_csi": false, 00:21:45.424 "method": "bdev_nvme_attach_controller", 00:21:45.424 "req_id": 1 00:21:45.424 } 00:21:45.424 Got JSON-RPC error response 00:21:45.424 response: 00:21:45.424 { 00:21:45.424 "code": -5, 00:21:45.424 "message": "Input/output error" 00:21:45.424 } 00:21:45.424 08:34:32 keyring_file -- common/autotest_common.sh@658 -- # es=1 00:21:45.424 08:34:32 keyring_file -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:21:45.424 08:34:32 keyring_file -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:21:45.424 08:34:32 keyring_file -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:21:45.424 08:34:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:45.424 08:34:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:45.424 08:34:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:45.424 08:34:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:45.424 08:34:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.424 08:34:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:45.424 08:34:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:45.424 08:34:32 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:45.424 08:34:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:45.424 08:34:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:45.424 08:34:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:45.424 08:34:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:45.424 08:34:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.991 08:34:33 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:45.991 08:34:33 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:45.991 08:34:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:45.991 08:34:33 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:45.991 08:34:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:46.559 08:34:33 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:46.559 08:34:33 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:46.559 08:34:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:46.818 08:34:34 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:46.818 08:34:34 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.0kHUWevv3x 00:21:46.818 08:34:34 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.0kHUWevv3x 00:21:46.818 08:34:34 keyring_file -- 
common/autotest_common.sh@655 -- # local es=0 00:21:46.818 08:34:34 keyring_file -- common/autotest_common.sh@657 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.0kHUWevv3x 00:21:46.818 08:34:34 keyring_file -- common/autotest_common.sh@643 -- # local arg=bperf_cmd 00:21:46.818 08:34:34 keyring_file -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:21:46.818 08:34:34 keyring_file -- common/autotest_common.sh@647 -- # type -t bperf_cmd 00:21:46.818 08:34:34 keyring_file -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:21:46.818 08:34:34 keyring_file -- common/autotest_common.sh@658 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0kHUWevv3x 00:21:46.818 08:34:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0kHUWevv3x 00:21:47.077 [2024-11-20 08:34:34.408188] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0kHUWevv3x': 0100660 00:21:47.077 [2024-11-20 08:34:34.408238] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:47.077 request: 00:21:47.077 { 00:21:47.077 "name": "key0", 00:21:47.077 "path": "/tmp/tmp.0kHUWevv3x", 00:21:47.077 "method": "keyring_file_add_key", 00:21:47.077 "req_id": 1 00:21:47.077 } 00:21:47.077 Got JSON-RPC error response 00:21:47.077 response: 00:21:47.077 { 00:21:47.077 "code": -1, 00:21:47.077 "message": "Operation not permitted" 00:21:47.077 } 00:21:47.077 08:34:34 keyring_file -- common/autotest_common.sh@658 -- # es=1 00:21:47.077 08:34:34 keyring_file -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:21:47.077 08:34:34 keyring_file -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:21:47.077 08:34:34 keyring_file -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:21:47.077 08:34:34 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.0kHUWevv3x 00:21:47.077 08:34:34 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0kHUWevv3x 00:21:47.077 08:34:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0kHUWevv3x 00:21:47.336 08:34:34 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.0kHUWevv3x 00:21:47.336 08:34:34 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:47.336 08:34:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:47.336 08:34:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:47.336 08:34:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:47.336 08:34:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.336 08:34:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.595 08:34:35 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:47.595 08:34:35 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:47.595 08:34:35 keyring_file -- common/autotest_common.sh@655 -- # local es=0 00:21:47.595 08:34:35 keyring_file -- common/autotest_common.sh@657 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:47.595 08:34:35 
keyring_file -- common/autotest_common.sh@643 -- # local arg=bperf_cmd 00:21:47.595 08:34:35 keyring_file -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:21:47.595 08:34:35 keyring_file -- common/autotest_common.sh@647 -- # type -t bperf_cmd 00:21:47.595 08:34:35 keyring_file -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:21:47.595 08:34:35 keyring_file -- common/autotest_common.sh@658 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:47.595 08:34:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:48.162 [2024-11-20 08:34:35.436120] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.0kHUWevv3x': No such file or directory 00:21:48.162 [2024-11-20 08:34:35.436368] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:48.162 [2024-11-20 08:34:35.436408] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:48.162 [2024-11-20 08:34:35.436431] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:48.162 [2024-11-20 08:34:35.436441] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:48.162 [2024-11-20 08:34:35.436450] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:48.162 request: 00:21:48.162 { 00:21:48.162 "name": "nvme0", 00:21:48.162 "trtype": "tcp", 00:21:48.162 "traddr": "127.0.0.1", 00:21:48.162 "adrfam": "ipv4", 00:21:48.162 "trsvcid": "4420", 00:21:48.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:48.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:48.162 "prchk_reftag": false, 00:21:48.162 "prchk_guard": false, 00:21:48.162 "hdgst": false, 00:21:48.162 "ddgst": false, 00:21:48.162 "psk": "key0", 00:21:48.162 "allow_unrecognized_csi": false, 00:21:48.162 "method": "bdev_nvme_attach_controller", 00:21:48.162 "req_id": 1 00:21:48.162 } 00:21:48.162 Got JSON-RPC error response 00:21:48.162 response: 00:21:48.162 { 00:21:48.162 "code": -19, 00:21:48.162 "message": "No such device" 00:21:48.162 } 00:21:48.162 08:34:35 keyring_file -- common/autotest_common.sh@658 -- # es=1 00:21:48.162 08:34:35 keyring_file -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:21:48.162 08:34:35 keyring_file -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:21:48.162 08:34:35 keyring_file -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:21:48.162 08:34:35 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:48.162 08:34:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:48.162 08:34:35 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:48.162 08:34:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:48.162 08:34:35 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:48.162 08:34:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:48.162 
08:34:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:48.422 08:34:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:48.422 08:34:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vFW0Q1tRpT 00:21:48.422 08:34:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:48.422 08:34:35 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:48.422 08:34:35 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.422 08:34:35 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:48.422 08:34:35 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:48.422 08:34:35 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:48.422 08:34:35 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:48.422 08:34:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vFW0Q1tRpT 00:21:48.422 08:34:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vFW0Q1tRpT 00:21:48.422 08:34:35 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.vFW0Q1tRpT 00:21:48.422 08:34:35 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vFW0Q1tRpT 00:21:48.422 08:34:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vFW0Q1tRpT 00:21:48.681 08:34:36 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:48.681 08:34:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:49.248 nvme0n1 00:21:49.248 08:34:36 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:49.248 08:34:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:49.248 08:34:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:49.248 08:34:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:49.248 08:34:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:49.248 08:34:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:49.508 08:34:36 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:49.508 08:34:36 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:49.508 08:34:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:49.767 08:34:37 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:49.767 08:34:37 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:49.767 08:34:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:49.767 08:34:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:49.767 08:34:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:50.025 08:34:37 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:50.025 08:34:37 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:50.025 08:34:37 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:21:50.025 08:34:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:50.025 08:34:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:50.025 08:34:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:50.026 08:34:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:50.285 08:34:37 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:50.285 08:34:37 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:50.285 08:34:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:50.545 08:34:38 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:50.545 08:34:38 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:50.545 08:34:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.114 08:34:38 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:51.114 08:34:38 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vFW0Q1tRpT 00:21:51.114 08:34:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vFW0Q1tRpT 00:21:51.373 08:34:38 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.zXAeWLPciK 00:21:51.373 08:34:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.zXAeWLPciK 00:21:51.632 08:34:38 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:51.632 08:34:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:51.890 nvme0n1 00:21:51.890 08:34:39 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:51.890 08:34:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:52.458 08:34:39 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:52.458 "subsystems": [ 00:21:52.458 { 00:21:52.458 "subsystem": "keyring", 00:21:52.458 "config": [ 00:21:52.458 { 00:21:52.458 "method": "keyring_file_add_key", 00:21:52.458 "params": { 00:21:52.458 "name": "key0", 00:21:52.458 "path": "/tmp/tmp.vFW0Q1tRpT" 00:21:52.458 } 00:21:52.458 }, 00:21:52.458 { 00:21:52.458 "method": "keyring_file_add_key", 00:21:52.458 "params": { 00:21:52.458 "name": "key1", 00:21:52.458 "path": "/tmp/tmp.zXAeWLPciK" 00:21:52.458 } 00:21:52.458 } 00:21:52.458 ] 00:21:52.458 }, 00:21:52.458 { 00:21:52.458 "subsystem": "iobuf", 00:21:52.458 "config": [ 00:21:52.458 { 00:21:52.458 "method": "iobuf_set_options", 00:21:52.458 "params": { 00:21:52.458 "small_pool_count": 8192, 00:21:52.458 "large_pool_count": 1024, 00:21:52.458 "small_bufsize": 8192, 00:21:52.458 "large_bufsize": 135168, 00:21:52.458 "enable_numa": false 00:21:52.458 } 00:21:52.458 } 00:21:52.458 ] 00:21:52.458 }, 00:21:52.458 { 00:21:52.458 "subsystem": 
"sock", 00:21:52.458 "config": [ 00:21:52.458 { 00:21:52.458 "method": "sock_set_default_impl", 00:21:52.458 "params": { 00:21:52.458 "impl_name": "uring" 00:21:52.458 } 00:21:52.458 }, 00:21:52.458 { 00:21:52.458 "method": "sock_impl_set_options", 00:21:52.458 "params": { 00:21:52.458 "impl_name": "ssl", 00:21:52.458 "recv_buf_size": 4096, 00:21:52.458 "send_buf_size": 4096, 00:21:52.458 "enable_recv_pipe": true, 00:21:52.458 "enable_quickack": false, 00:21:52.458 "enable_placement_id": 0, 00:21:52.458 "enable_zerocopy_send_server": true, 00:21:52.458 "enable_zerocopy_send_client": false, 00:21:52.458 "zerocopy_threshold": 0, 00:21:52.458 "tls_version": 0, 00:21:52.458 "enable_ktls": false 00:21:52.458 } 00:21:52.458 }, 00:21:52.458 { 00:21:52.458 "method": "sock_impl_set_options", 00:21:52.458 "params": { 00:21:52.458 "impl_name": "posix", 00:21:52.458 "recv_buf_size": 2097152, 00:21:52.458 "send_buf_size": 2097152, 00:21:52.458 "enable_recv_pipe": true, 00:21:52.458 "enable_quickack": false, 00:21:52.459 "enable_placement_id": 0, 00:21:52.459 "enable_zerocopy_send_server": true, 00:21:52.459 "enable_zerocopy_send_client": false, 00:21:52.459 "zerocopy_threshold": 0, 00:21:52.459 "tls_version": 0, 00:21:52.459 "enable_ktls": false 00:21:52.459 } 00:21:52.459 }, 00:21:52.459 { 00:21:52.459 "method": "sock_impl_set_options", 00:21:52.459 "params": { 00:21:52.459 "impl_name": "uring", 00:21:52.459 "recv_buf_size": 2097152, 00:21:52.459 "send_buf_size": 2097152, 00:21:52.459 "enable_recv_pipe": true, 00:21:52.459 "enable_quickack": false, 00:21:52.459 "enable_placement_id": 0, 00:21:52.459 "enable_zerocopy_send_server": false, 00:21:52.459 "enable_zerocopy_send_client": false, 00:21:52.459 "zerocopy_threshold": 0, 00:21:52.459 "tls_version": 0, 00:21:52.459 "enable_ktls": false 00:21:52.459 } 00:21:52.459 } 00:21:52.459 ] 00:21:52.459 }, 00:21:52.459 { 00:21:52.459 "subsystem": "vmd", 00:21:52.459 "config": [] 00:21:52.459 }, 00:21:52.459 { 00:21:52.459 "subsystem": "accel", 00:21:52.459 "config": [ 00:21:52.459 { 00:21:52.459 "method": "accel_set_options", 00:21:52.459 "params": { 00:21:52.459 "small_cache_size": 128, 00:21:52.459 "large_cache_size": 16, 00:21:52.459 "task_count": 2048, 00:21:52.459 "sequence_count": 2048, 00:21:52.459 "buf_count": 2048 00:21:52.459 } 00:21:52.459 } 00:21:52.459 ] 00:21:52.459 }, 00:21:52.459 { 00:21:52.459 "subsystem": "bdev", 00:21:52.459 "config": [ 00:21:52.459 { 00:21:52.459 "method": "bdev_set_options", 00:21:52.459 "params": { 00:21:52.459 "bdev_io_pool_size": 65535, 00:21:52.459 "bdev_io_cache_size": 256, 00:21:52.459 "bdev_auto_examine": true, 00:21:52.459 "iobuf_small_cache_size": 128, 00:21:52.459 "iobuf_large_cache_size": 16 00:21:52.459 } 00:21:52.459 }, 00:21:52.459 { 00:21:52.459 "method": "bdev_raid_set_options", 00:21:52.459 "params": { 00:21:52.459 "process_window_size_kb": 1024, 00:21:52.459 "process_max_bandwidth_mb_sec": 0 00:21:52.459 } 00:21:52.459 }, 00:21:52.459 { 00:21:52.459 "method": "bdev_iscsi_set_options", 00:21:52.459 "params": { 00:21:52.459 "timeout_sec": 30 00:21:52.459 } 00:21:52.459 }, 00:21:52.459 { 00:21:52.459 "method": "bdev_nvme_set_options", 00:21:52.459 "params": { 00:21:52.459 "action_on_timeout": "none", 00:21:52.459 "timeout_us": 0, 00:21:52.459 "timeout_admin_us": 0, 00:21:52.459 "keep_alive_timeout_ms": 10000, 00:21:52.459 "arbitration_burst": 0, 00:21:52.459 "low_priority_weight": 0, 00:21:52.459 "medium_priority_weight": 0, 00:21:52.459 "high_priority_weight": 0, 00:21:52.459 "nvme_adminq_poll_period_us": 
10000, 00:21:52.459 "nvme_ioq_poll_period_us": 0, 00:21:52.459 "io_queue_requests": 512, 00:21:52.459 "delay_cmd_submit": true, 00:21:52.459 "transport_retry_count": 4, 00:21:52.459 "bdev_retry_count": 3, 00:21:52.459 "transport_ack_timeout": 0, 00:21:52.459 "ctrlr_loss_timeout_sec": 0, 00:21:52.459 "reconnect_delay_sec": 0, 00:21:52.459 "fast_io_fail_timeout_sec": 0, 00:21:52.459 "disable_auto_failback": false, 00:21:52.459 "generate_uuids": false, 00:21:52.459 "transport_tos": 0, 00:21:52.459 "nvme_error_stat": false, 00:21:52.459 "rdma_srq_size": 0, 00:21:52.459 "io_path_stat": false, 00:21:52.459 "allow_accel_sequence": false, 00:21:52.459 "rdma_max_cq_size": 0, 00:21:52.459 "rdma_cm_event_timeout_ms": 0, 00:21:52.459 "dhchap_digests": [ 00:21:52.459 "sha256", 00:21:52.459 "sha384", 00:21:52.459 "sha512" 00:21:52.459 ], 00:21:52.459 "dhchap_dhgroups": [ 00:21:52.459 "null", 00:21:52.459 "ffdhe2048", 00:21:52.459 "ffdhe3072", 00:21:52.459 "ffdhe4096", 00:21:52.459 "ffdhe6144", 00:21:52.459 "ffdhe8192" 00:21:52.459 ] 00:21:52.459 } 00:21:52.459 }, 00:21:52.459 { 00:21:52.459 "method": "bdev_nvme_attach_controller", 00:21:52.459 "params": { 00:21:52.459 "name": "nvme0", 00:21:52.459 "trtype": "TCP", 00:21:52.459 "adrfam": "IPv4", 00:21:52.459 "traddr": "127.0.0.1", 00:21:52.459 "trsvcid": "4420", 00:21:52.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:52.459 "prchk_reftag": false, 00:21:52.459 "prchk_guard": false, 00:21:52.459 "ctrlr_loss_timeout_sec": 0, 00:21:52.459 "reconnect_delay_sec": 0, 00:21:52.459 "fast_io_fail_timeout_sec": 0, 00:21:52.459 "psk": "key0", 00:21:52.459 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:52.459 "hdgst": false, 00:21:52.459 "ddgst": false, 00:21:52.459 "multipath": "multipath" 00:21:52.459 } 00:21:52.459 }, 00:21:52.459 { 00:21:52.459 "method": "bdev_nvme_set_hotplug", 00:21:52.459 "params": { 00:21:52.459 "period_us": 100000, 00:21:52.459 "enable": false 00:21:52.459 } 00:21:52.459 }, 00:21:52.459 { 00:21:52.459 "method": "bdev_wait_for_examine" 00:21:52.459 } 00:21:52.459 ] 00:21:52.459 }, 00:21:52.459 { 00:21:52.459 "subsystem": "nbd", 00:21:52.459 "config": [] 00:21:52.459 } 00:21:52.459 ] 00:21:52.459 }' 00:21:52.459 08:34:39 keyring_file -- keyring/file.sh@115 -- # killprocess 85416 00:21:52.459 08:34:39 keyring_file -- common/autotest_common.sh@957 -- # '[' -z 85416 ']' 00:21:52.459 08:34:39 keyring_file -- common/autotest_common.sh@961 -- # kill -0 85416 00:21:52.459 08:34:39 keyring_file -- common/autotest_common.sh@962 -- # uname 00:21:52.459 08:34:39 keyring_file -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:21:52.459 08:34:39 keyring_file -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 85416 00:21:52.459 killing process with pid 85416 00:21:52.459 Received shutdown signal, test time was about 1.000000 seconds 00:21:52.459 00:21:52.459 Latency(us) 00:21:52.459 [2024-11-20T08:34:40.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.459 [2024-11-20T08:34:40.020Z] =================================================================================================================== 00:21:52.459 [2024-11-20T08:34:40.020Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:52.459 08:34:39 keyring_file -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:21:52.459 08:34:39 keyring_file -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:21:52.459 08:34:39 keyring_file -- common/autotest_common.sh@975 -- # echo 'killing process with pid 85416' 00:21:52.459 
08:34:39 keyring_file -- common/autotest_common.sh@976 -- # kill 85416 00:21:52.459 08:34:39 keyring_file -- common/autotest_common.sh@981 -- # wait 85416 00:21:52.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:52.718 08:34:40 keyring_file -- keyring/file.sh@118 -- # bperfpid=85679 00:21:52.718 08:34:40 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85679 /var/tmp/bperf.sock 00:21:52.718 08:34:40 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:52.719 08:34:40 keyring_file -- common/autotest_common.sh@838 -- # '[' -z 85679 ']' 00:21:52.719 08:34:40 keyring_file -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:52.719 08:34:40 keyring_file -- common/autotest_common.sh@843 -- # local max_retries=100 00:21:52.719 08:34:40 keyring_file -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:52.719 08:34:40 keyring_file -- common/autotest_common.sh@847 -- # xtrace_disable 00:21:52.719 08:34:40 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:52.719 "subsystems": [ 00:21:52.719 { 00:21:52.719 "subsystem": "keyring", 00:21:52.719 "config": [ 00:21:52.719 { 00:21:52.719 "method": "keyring_file_add_key", 00:21:52.719 "params": { 00:21:52.719 "name": "key0", 00:21:52.719 "path": "/tmp/tmp.vFW0Q1tRpT" 00:21:52.719 } 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "method": "keyring_file_add_key", 00:21:52.719 "params": { 00:21:52.719 "name": "key1", 00:21:52.719 "path": "/tmp/tmp.zXAeWLPciK" 00:21:52.719 } 00:21:52.719 } 00:21:52.719 ] 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "subsystem": "iobuf", 00:21:52.719 "config": [ 00:21:52.719 { 00:21:52.719 "method": "iobuf_set_options", 00:21:52.719 "params": { 00:21:52.719 "small_pool_count": 8192, 00:21:52.719 "large_pool_count": 1024, 00:21:52.719 "small_bufsize": 8192, 00:21:52.719 "large_bufsize": 135168, 00:21:52.719 "enable_numa": false 00:21:52.719 } 00:21:52.719 } 00:21:52.719 ] 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "subsystem": "sock", 00:21:52.719 "config": [ 00:21:52.719 { 00:21:52.719 "method": "sock_set_default_impl", 00:21:52.719 "params": { 00:21:52.719 "impl_name": "uring" 00:21:52.719 } 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "method": "sock_impl_set_options", 00:21:52.719 "params": { 00:21:52.719 "impl_name": "ssl", 00:21:52.719 "recv_buf_size": 4096, 00:21:52.719 "send_buf_size": 4096, 00:21:52.719 "enable_recv_pipe": true, 00:21:52.719 "enable_quickack": false, 00:21:52.719 "enable_placement_id": 0, 00:21:52.719 "enable_zerocopy_send_server": true, 00:21:52.719 "enable_zerocopy_send_client": false, 00:21:52.719 "zerocopy_threshold": 0, 00:21:52.719 "tls_version": 0, 00:21:52.719 "enable_ktls": false 00:21:52.719 } 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "method": "sock_impl_set_options", 00:21:52.719 "params": { 00:21:52.719 "impl_name": "posix", 00:21:52.719 "recv_buf_size": 2097152, 00:21:52.719 "send_buf_size": 2097152, 00:21:52.719 "enable_recv_pipe": true, 00:21:52.719 "enable_quickack": false, 00:21:52.719 "enable_placement_id": 0, 00:21:52.719 "enable_zerocopy_send_server": true, 00:21:52.719 "enable_zerocopy_send_client": false, 00:21:52.719 "zerocopy_threshold": 0, 00:21:52.719 "tls_version": 0, 00:21:52.719 "enable_ktls": false 00:21:52.719 } 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "method": 
"sock_impl_set_options", 00:21:52.719 "params": { 00:21:52.719 "impl_name": "uring", 00:21:52.719 "recv_buf_size": 2097152, 00:21:52.719 "send_buf_size": 2097152, 00:21:52.719 "enable_recv_pipe": true, 00:21:52.719 "enable_quickack": false, 00:21:52.719 "enable_placement_id": 0, 00:21:52.719 "enable_zerocopy_send_server": false, 00:21:52.719 "enable_zerocopy_send_client": false, 00:21:52.719 "zerocopy_threshold": 0, 00:21:52.719 "tls_version": 0, 00:21:52.719 "enable_ktls": false 00:21:52.719 } 00:21:52.719 } 00:21:52.719 ] 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "subsystem": "vmd", 00:21:52.719 "config": [] 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "subsystem": "accel", 00:21:52.719 "config": [ 00:21:52.719 { 00:21:52.719 "method": "accel_set_options", 00:21:52.719 "params": { 00:21:52.719 "small_cache_size": 128, 00:21:52.719 "large_cache_size": 16, 00:21:52.719 "task_count": 2048, 00:21:52.719 "sequence_count": 2048, 00:21:52.719 "buf_count": 2048 00:21:52.719 } 00:21:52.719 } 00:21:52.719 ] 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "subsystem": "bdev", 00:21:52.719 "config": [ 00:21:52.719 { 00:21:52.719 "method": "bdev_set_options", 00:21:52.719 "params": { 00:21:52.719 "bdev_io_pool_size": 65535, 00:21:52.719 "bdev_io_cache_size": 256, 00:21:52.719 "bdev_auto_examine": true, 00:21:52.719 "iobuf_small_cache_size": 128, 00:21:52.719 "iobuf_large_cache_size": 16 00:21:52.719 } 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "method": "bdev_raid_set_options", 00:21:52.719 "params": { 00:21:52.719 "process_window_size_kb": 1024, 00:21:52.719 "process_max_bandwidth_mb_sec": 0 00:21:52.719 } 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "method": "bdev_iscsi_set_options", 00:21:52.719 "params": { 00:21:52.719 "timeout_sec": 30 00:21:52.719 } 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "method": "bdev_nvme_set_options", 00:21:52.719 "params": { 00:21:52.719 "action_on_timeout": "none", 00:21:52.719 "timeout_us": 0, 00:21:52.719 "timeout_admin_us": 0, 00:21:52.719 "keep_alive_timeout_ms": 10000, 00:21:52.719 "arbitration_burst": 0, 00:21:52.719 "low_priority_weight": 0, 00:21:52.719 "medium_priority_weight": 0, 00:21:52.719 "high_priority_weight": 0, 00:21:52.719 "nvme_adminq_poll_period_us": 10000, 00:21:52.719 "nvme_ioq_poll_period_us": 0, 00:21:52.719 "io_queue_requests": 512, 00:21:52.719 "delay_cmd_submit": true, 00:21:52.719 "transport_retry_count": 4, 00:21:52.719 "bdev_retry_count": 3, 00:21:52.719 "transport_ack_timeout": 0, 00:21:52.719 "ctrlr_loss_timeout_sec": 0, 00:21:52.719 "reconnect_delay_sec": 0, 00:21:52.719 "fast_io_fail_timeout_sec": 0, 00:21:52.719 "disable_auto_failback": false, 00:21:52.719 "generate_uuids": false, 00:21:52.719 "transport_tos": 0, 00:21:52.719 "nvme_error_stat": false, 00:21:52.719 "rdma_srq_size": 0, 00:21:52.719 "io_path_stat": false, 00:21:52.719 "allow_accel_sequence": false, 00:21:52.719 "rdma_max_cq_size": 0, 00:21:52.719 "rdma_cm_event_timeout_ms": 0, 00:21:52.719 "dhchap_digests": [ 00:21:52.719 "sha256", 00:21:52.719 "sha384", 00:21:52.719 "sha512" 00:21:52.719 ], 00:21:52.719 "dhchap_dhgroups": [ 00:21:52.719 "null", 00:21:52.719 "ffdhe2048", 00:21:52.719 "ffdhe3072", 00:21:52.719 "ffdhe4096", 00:21:52.719 "ffdhe6144", 00:21:52.719 "ffdhe8192" 00:21:52.719 ] 00:21:52.719 } 00:21:52.719 }, 00:21:52.719 { 00:21:52.719 "method": "bdev_nvme_attach_controller", 00:21:52.719 "params": { 00:21:52.719 "name": "nvme0", 00:21:52.719 "trtype": "TCP", 00:21:52.719 "adrfam": "IPv4", 00:21:52.719 "traddr": "127.0.0.1", 00:21:52.719 "trsvcid": "4420", 
00:21:52.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:52.719 "prchk_reftag": false, 00:21:52.719 "prchk_guard": false, 00:21:52.719 "ctrlr_loss_timeout_sec": 0, 00:21:52.719 "reconnect_delay_sec": 0, 00:21:52.719 "fast_io_fail_timeout_sec": 0, 00:21:52.719 "psk": "key0", 00:21:52.719 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:52.719 "hdgst": false, 00:21:52.719 "ddgst": false, 00:21:52.719 "multipath": "multipath" 00:21:52.719 } 00:21:52.720 }, 00:21:52.720 { 00:21:52.720 "method": "bdev_nvme_set_hotplug", 00:21:52.720 "params": { 00:21:52.720 "period_us": 100000, 00:21:52.720 "enable": false 00:21:52.720 } 00:21:52.720 }, 00:21:52.720 { 00:21:52.720 "method": "bdev_wait_for_examine" 00:21:52.720 } 00:21:52.720 ] 00:21:52.720 }, 00:21:52.720 { 00:21:52.720 "subsystem": "nbd", 00:21:52.720 "config": [] 00:21:52.720 } 00:21:52.720 ] 00:21:52.720 }' 00:21:52.720 08:34:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:52.720 [2024-11-20 08:34:40.090930] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 00:21:52.720 [2024-11-20 08:34:40.091165] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85679 ] 00:21:52.720 [2024-11-20 08:34:40.231925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.979 [2024-11-20 08:34:40.297022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.979 [2024-11-20 08:34:40.440328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:52.979 [2024-11-20 08:34:40.506776] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:53.916 08:34:41 keyring_file -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:21:53.916 08:34:41 keyring_file -- common/autotest_common.sh@871 -- # return 0 00:21:53.916 08:34:41 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:53.916 08:34:41 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:53.916 08:34:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:53.916 08:34:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:53.916 08:34:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:53.916 08:34:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:53.916 08:34:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:53.916 08:34:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:53.916 08:34:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:53.916 08:34:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:54.485 08:34:41 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:54.485 08:34:41 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:54.485 08:34:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:54.485 08:34:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:54.485 08:34:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.485 08:34:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:54.485 08:34:41 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:54.744 08:34:42 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:54.744 08:34:42 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:54.744 08:34:42 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:54.744 08:34:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:55.003 08:34:42 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:55.003 08:34:42 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:55.003 08:34:42 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.vFW0Q1tRpT /tmp/tmp.zXAeWLPciK 00:21:55.003 08:34:42 keyring_file -- keyring/file.sh@20 -- # killprocess 85679 00:21:55.003 08:34:42 keyring_file -- common/autotest_common.sh@957 -- # '[' -z 85679 ']' 00:21:55.003 08:34:42 keyring_file -- common/autotest_common.sh@961 -- # kill -0 85679 00:21:55.003 08:34:42 keyring_file -- common/autotest_common.sh@962 -- # uname 00:21:55.003 08:34:42 keyring_file -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:21:55.003 08:34:42 keyring_file -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 85679 00:21:55.003 08:34:42 keyring_file -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:21:55.003 08:34:42 keyring_file -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:21:55.003 08:34:42 keyring_file -- common/autotest_common.sh@975 -- # echo 'killing process with pid 85679' 00:21:55.003 killing process with pid 85679 00:21:55.003 Received shutdown signal, test time was about 1.000000 seconds 00:21:55.003 00:21:55.003 Latency(us) 00:21:55.003 [2024-11-20T08:34:42.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.003 [2024-11-20T08:34:42.564Z] =================================================================================================================== 00:21:55.003 [2024-11-20T08:34:42.564Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:55.003 08:34:42 keyring_file -- common/autotest_common.sh@976 -- # kill 85679 00:21:55.003 08:34:42 keyring_file -- common/autotest_common.sh@981 -- # wait 85679 00:21:55.262 08:34:42 keyring_file -- keyring/file.sh@21 -- # killprocess 85400 00:21:55.262 08:34:42 keyring_file -- common/autotest_common.sh@957 -- # '[' -z 85400 ']' 00:21:55.262 08:34:42 keyring_file -- common/autotest_common.sh@961 -- # kill -0 85400 00:21:55.262 08:34:42 keyring_file -- common/autotest_common.sh@962 -- # uname 00:21:55.262 08:34:42 keyring_file -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:21:55.262 08:34:42 keyring_file -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 85400 00:21:55.262 killing process with pid 85400 00:21:55.262 08:34:42 keyring_file -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:21:55.262 08:34:42 keyring_file -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:21:55.262 08:34:42 keyring_file -- common/autotest_common.sh@975 -- # echo 'killing process with pid 85400' 00:21:55.262 08:34:42 keyring_file -- common/autotest_common.sh@976 -- # kill 85400 00:21:55.262 08:34:42 keyring_file -- common/autotest_common.sh@981 -- # wait 85400 00:21:55.521 ************************************ 00:21:55.521 END TEST keyring_file 00:21:55.521 ************************************ 00:21:55.521 00:21:55.521 real 0m17.890s 00:21:55.521 user 0m42.742s 00:21:55.521 
sys 0m3.210s 00:21:55.521 08:34:42 keyring_file -- common/autotest_common.sh@1133 -- # xtrace_disable 00:21:55.521 08:34:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:55.521 08:34:43 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:21:55.521 08:34:43 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:55.521 08:34:43 -- common/autotest_common.sh@1108 -- # '[' 3 -le 1 ']' 00:21:55.521 08:34:43 -- common/autotest_common.sh@1114 -- # xtrace_disable 00:21:55.521 08:34:43 -- common/autotest_common.sh@10 -- # set +x 00:21:55.521 ************************************ 00:21:55.521 START TEST keyring_linux 00:21:55.521 ************************************ 00:21:55.521 08:34:43 keyring_linux -- common/autotest_common.sh@1132 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:55.521 Joined session keyring: 276904671 00:21:55.781 * Looking for test storage... 00:21:55.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:55.781 08:34:43 keyring_linux -- common/autotest_common.sh@1637 -- # [[ y == y ]] 00:21:55.781 08:34:43 keyring_linux -- common/autotest_common.sh@1638 -- # lcov --version 00:21:55.781 08:34:43 keyring_linux -- common/autotest_common.sh@1638 -- # awk '{print $NF}' 00:21:55.781 08:34:43 keyring_linux -- common/autotest_common.sh@1638 -- # lt 1.15 2 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:55.781 08:34:43 keyring_linux -- common/autotest_common.sh@1639 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.781 08:34:43 keyring_linux -- common/autotest_common.sh@1651 -- # export 'LCOV_OPTS= 00:21:55.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.781 --rc genhtml_branch_coverage=1 00:21:55.781 --rc genhtml_function_coverage=1 00:21:55.781 --rc genhtml_legend=1 00:21:55.781 --rc geninfo_all_blocks=1 00:21:55.781 --rc geninfo_unexecuted_blocks=1 00:21:55.781 00:21:55.781 ' 00:21:55.781 08:34:43 keyring_linux -- common/autotest_common.sh@1651 -- # LCOV_OPTS=' 00:21:55.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.781 --rc genhtml_branch_coverage=1 00:21:55.781 --rc genhtml_function_coverage=1 00:21:55.781 --rc genhtml_legend=1 00:21:55.781 --rc geninfo_all_blocks=1 00:21:55.781 --rc geninfo_unexecuted_blocks=1 00:21:55.781 00:21:55.781 ' 00:21:55.781 08:34:43 keyring_linux -- common/autotest_common.sh@1652 -- # export 'LCOV=lcov 00:21:55.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.781 --rc genhtml_branch_coverage=1 00:21:55.781 --rc genhtml_function_coverage=1 00:21:55.781 --rc genhtml_legend=1 00:21:55.781 --rc geninfo_all_blocks=1 00:21:55.781 --rc geninfo_unexecuted_blocks=1 00:21:55.781 00:21:55.781 ' 00:21:55.781 08:34:43 keyring_linux -- common/autotest_common.sh@1652 -- # LCOV='lcov 00:21:55.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.781 --rc genhtml_branch_coverage=1 00:21:55.781 --rc genhtml_function_coverage=1 00:21:55.781 --rc genhtml_legend=1 00:21:55.781 --rc geninfo_all_blocks=1 00:21:55.781 --rc geninfo_unexecuted_blocks=1 00:21:55.781 00:21:55.781 ' 00:21:55.781 08:34:43 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:55.781 08:34:43 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.781 08:34:43 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=3c963c17-7e7f-4dbf-a8c5-d0b1ce2e58e4 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.781 08:34:43 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.781 08:34:43 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.782 08:34:43 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.782 08:34:43 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.782 08:34:43 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.782 08:34:43 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:55.782 08:34:43 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.782 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:55.782 08:34:43 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:55.782 08:34:43 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:55.782 08:34:43 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:55.782 08:34:43 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:55.782 08:34:43 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:55.782 08:34:43 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:55.782 /tmp/:spdk-test:key0 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:55.782 08:34:43 keyring_linux -- keyring/linux.sh@48 -- # 
prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:55.782 08:34:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:55.782 08:34:43 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:56.041 08:34:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:56.041 /tmp/:spdk-test:key1 00:21:56.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.041 08:34:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:56.041 08:34:43 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85812 00:21:56.041 08:34:43 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:56.041 08:34:43 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85812 00:21:56.041 08:34:43 keyring_linux -- common/autotest_common.sh@838 -- # '[' -z 85812 ']' 00:21:56.041 08:34:43 keyring_linux -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.041 08:34:43 keyring_linux -- common/autotest_common.sh@843 -- # local max_retries=100 00:21:56.041 08:34:43 keyring_linux -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.041 08:34:43 keyring_linux -- common/autotest_common.sh@847 -- # xtrace_disable 00:21:56.041 08:34:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:56.041 [2024-11-20 08:34:43.450050] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:21:56.041 [2024-11-20 08:34:43.450274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85812 ] 00:21:56.041 [2024-11-20 08:34:43.599679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.300 [2024-11-20 08:34:43.661503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.300 [2024-11-20 08:34:43.741548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:56.559 08:34:43 keyring_linux -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:21:56.559 08:34:43 keyring_linux -- common/autotest_common.sh@871 -- # return 0 00:21:56.559 08:34:43 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:56.559 08:34:43 keyring_linux -- common/autotest_common.sh@566 -- # xtrace_disable 00:21:56.559 08:34:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:56.559 [2024-11-20 08:34:43.961277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.559 null0 00:21:56.559 [2024-11-20 08:34:43.993257] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:56.559 [2024-11-20 08:34:43.993572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:56.559 08:34:44 keyring_linux -- common/autotest_common.sh@594 -- # [[ 0 == 0 ]] 00:21:56.559 08:34:44 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:56.559 657049253 00:21:56.559 08:34:44 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:56.559 667439064 00:21:56.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:56.559 08:34:44 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85827 00:21:56.559 08:34:44 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85827 /var/tmp/bperf.sock 00:21:56.559 08:34:44 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:56.559 08:34:44 keyring_linux -- common/autotest_common.sh@838 -- # '[' -z 85827 ']' 00:21:56.559 08:34:44 keyring_linux -- common/autotest_common.sh@842 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:56.559 08:34:44 keyring_linux -- common/autotest_common.sh@843 -- # local max_retries=100 00:21:56.559 08:34:44 keyring_linux -- common/autotest_common.sh@845 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:56.559 08:34:44 keyring_linux -- common/autotest_common.sh@847 -- # xtrace_disable 00:21:56.559 08:34:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:56.559 [2024-11-20 08:34:44.066734] Starting SPDK v25.01-pre git sha1 717acfa62 / DPDK 24.03.0 initialization... 
00:21:56.559 [2024-11-20 08:34:44.066990] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85827 ] 00:21:56.818 [2024-11-20 08:34:44.211447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.818 [2024-11-20 08:34:44.260580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.818 08:34:44 keyring_linux -- common/autotest_common.sh@867 -- # (( i == 0 )) 00:21:56.818 08:34:44 keyring_linux -- common/autotest_common.sh@871 -- # return 0 00:21:56.818 08:34:44 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:56.818 08:34:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:57.077 08:34:44 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:57.077 08:34:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:57.646 [2024-11-20 08:34:44.916905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:57.646 08:34:44 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:57.646 08:34:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:57.906 [2024-11-20 08:34:45.245017] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.906 nvme0n1 00:21:57.906 08:34:45 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:57.906 08:34:45 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:57.906 08:34:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:57.906 08:34:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:57.906 08:34:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:57.906 08:34:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.165 08:34:45 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:58.165 08:34:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:58.165 08:34:45 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:58.165 08:34:45 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:58.165 08:34:45 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.165 08:34:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.165 08:34:45 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:58.425 08:34:45 keyring_linux -- keyring/linux.sh@25 -- # sn=657049253 00:21:58.425 08:34:45 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:58.425 08:34:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:21:58.425 08:34:45 keyring_linux -- keyring/linux.sh@26 -- # [[ 657049253 == \6\5\7\0\4\9\2\5\3 ]] 00:21:58.425 08:34:45 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 657049253 00:21:58.425 08:34:45 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:58.425 08:34:45 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:58.685 Running I/O for 1 seconds... 00:21:59.684 13345.00 IOPS, 52.13 MiB/s 00:21:59.684 Latency(us) 00:21:59.684 [2024-11-20T08:34:47.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.684 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:59.684 nvme0n1 : 1.01 13350.45 52.15 0.00 0.00 9537.95 2889.54 12571.00 00:21:59.684 [2024-11-20T08:34:47.245Z] =================================================================================================================== 00:21:59.684 [2024-11-20T08:34:47.245Z] Total : 13350.45 52.15 0.00 0.00 9537.95 2889.54 12571.00 00:21:59.684 { 00:21:59.684 "results": [ 00:21:59.684 { 00:21:59.684 "job": "nvme0n1", 00:21:59.684 "core_mask": "0x2", 00:21:59.684 "workload": "randread", 00:21:59.684 "status": "finished", 00:21:59.684 "queue_depth": 128, 00:21:59.684 "io_size": 4096, 00:21:59.684 "runtime": 1.009254, 00:21:59.684 "iops": 13350.454890443832, 00:21:59.684 "mibps": 52.15021441579622, 00:21:59.684 "io_failed": 0, 00:21:59.684 "io_timeout": 0, 00:21:59.684 "avg_latency_us": 9537.945588945713, 00:21:59.684 "min_latency_us": 2889.541818181818, 00:21:59.684 "max_latency_us": 12570.996363636363 00:21:59.684 } 00:21:59.684 ], 00:21:59.684 "core_count": 1 00:21:59.684 } 00:21:59.685 08:34:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:59.685 08:34:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:59.943 08:34:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:59.943 08:34:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:59.943 08:34:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:59.943 08:34:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:59.943 08:34:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:59.943 08:34:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:00.203 08:34:47 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:00.203 08:34:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:00.203 08:34:47 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:00.203 08:34:47 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:00.203 08:34:47 keyring_linux -- common/autotest_common.sh@655 -- # local es=0 00:22:00.203 08:34:47 keyring_linux -- common/autotest_common.sh@657 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:00.203 
08:34:47 keyring_linux -- common/autotest_common.sh@643 -- # local arg=bperf_cmd 00:22:00.203 08:34:47 keyring_linux -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:22:00.203 08:34:47 keyring_linux -- common/autotest_common.sh@647 -- # type -t bperf_cmd 00:22:00.203 08:34:47 keyring_linux -- common/autotest_common.sh@647 -- # case "$(type -t "$arg")" in 00:22:00.203 08:34:47 keyring_linux -- common/autotest_common.sh@658 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:00.203 08:34:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:00.462 [2024-11-20 08:34:47.957288] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:00.462 [2024-11-20 08:34:47.957987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14045d0 (107): Transport endpoint is not connected 00:22:00.462 [2024-11-20 08:34:47.958976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14045d0 (9): Bad file descriptor 00:22:00.462 [2024-11-20 08:34:47.959973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:22:00.462 [2024-11-20 08:34:47.960146] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:00.462 [2024-11-20 08:34:47.960168] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:00.462 [2024-11-20 08:34:47.960186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:22:00.462 request: 00:22:00.462 { 00:22:00.462 "name": "nvme0", 00:22:00.462 "trtype": "tcp", 00:22:00.462 "traddr": "127.0.0.1", 00:22:00.462 "adrfam": "ipv4", 00:22:00.463 "trsvcid": "4420", 00:22:00.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:00.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:00.463 "prchk_reftag": false, 00:22:00.463 "prchk_guard": false, 00:22:00.463 "hdgst": false, 00:22:00.463 "ddgst": false, 00:22:00.463 "psk": ":spdk-test:key1", 00:22:00.463 "allow_unrecognized_csi": false, 00:22:00.463 "method": "bdev_nvme_attach_controller", 00:22:00.463 "req_id": 1 00:22:00.463 } 00:22:00.463 Got JSON-RPC error response 00:22:00.463 response: 00:22:00.463 { 00:22:00.463 "code": -5, 00:22:00.463 "message": "Input/output error" 00:22:00.463 } 00:22:00.463 08:34:47 keyring_linux -- common/autotest_common.sh@658 -- # es=1 00:22:00.463 08:34:47 keyring_linux -- common/autotest_common.sh@666 -- # (( es > 128 )) 00:22:00.463 08:34:47 keyring_linux -- common/autotest_common.sh@677 -- # [[ -n '' ]] 00:22:00.463 08:34:47 keyring_linux -- common/autotest_common.sh@682 -- # (( !es == 0 )) 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@33 -- # sn=657049253 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 657049253 00:22:00.463 1 links removed 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@33 -- # sn=667439064 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 667439064 00:22:00.463 1 links removed 00:22:00.463 08:34:47 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85827 00:22:00.463 08:34:47 keyring_linux -- common/autotest_common.sh@957 -- # '[' -z 85827 ']' 00:22:00.463 08:34:47 keyring_linux -- common/autotest_common.sh@961 -- # kill -0 85827 00:22:00.463 08:34:47 keyring_linux -- common/autotest_common.sh@962 -- # uname 00:22:00.463 08:34:48 keyring_linux -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:22:00.463 08:34:48 keyring_linux -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 85827 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@963 -- # process_name=reactor_1 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@967 -- # '[' reactor_1 = sudo ']' 00:22:00.722 killing process with pid 85827 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@975 -- # echo 'killing process with pid 85827' 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@976 -- # kill 85827 00:22:00.722 Received shutdown signal, test time was about 1.000000 seconds 00:22:00.722 00:22:00.722 Latency(us) 
00:22:00.722 [2024-11-20T08:34:48.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.722 [2024-11-20T08:34:48.283Z] =================================================================================================================== 00:22:00.722 [2024-11-20T08:34:48.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@981 -- # wait 85827 00:22:00.722 08:34:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85812 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@957 -- # '[' -z 85812 ']' 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@961 -- # kill -0 85812 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@962 -- # uname 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@962 -- # '[' Linux = Linux ']' 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@963 -- # ps --no-headers -o comm= 85812 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@963 -- # process_name=reactor_0 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@967 -- # '[' reactor_0 = sudo ']' 00:22:00.722 killing process with pid 85812 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@975 -- # echo 'killing process with pid 85812' 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@976 -- # kill 85812 00:22:00.722 08:34:48 keyring_linux -- common/autotest_common.sh@981 -- # wait 85812 00:22:01.290 00:22:01.290 real 0m5.602s 00:22:01.290 user 0m11.065s 00:22:01.290 sys 0m1.548s 00:22:01.290 08:34:48 keyring_linux -- common/autotest_common.sh@1133 -- # xtrace_disable 00:22:01.290 08:34:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:01.290 ************************************ 00:22:01.290 END TEST keyring_linux 00:22:01.290 ************************************ 00:22:01.290 08:34:48 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:01.290 08:34:48 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:01.290 08:34:48 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:01.290 08:34:48 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:01.290 08:34:48 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:01.290 08:34:48 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:01.290 08:34:48 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:01.290 08:34:48 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:01.290 08:34:48 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:22:01.290 08:34:48 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:01.290 08:34:48 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:22:01.290 08:34:48 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:01.290 08:34:48 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:01.290 08:34:48 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:22:01.290 08:34:48 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:22:01.290 08:34:48 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:22:01.290 08:34:48 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:22:01.290 08:34:48 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:01.290 08:34:48 -- common/autotest_common.sh@10 -- # set +x 00:22:01.290 08:34:48 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:22:01.290 08:34:48 -- common/autotest_common.sh@1384 -- # local autotest_es=0 00:22:01.290 08:34:48 -- common/autotest_common.sh@1385 -- # xtrace_disable 00:22:01.290 08:34:48 -- common/autotest_common.sh@10 -- # set +x 00:22:03.195 INFO: APP EXITING 00:22:03.195 INFO: killing all VMs 
00:22:03.195 INFO: killing vhost app 00:22:03.195 INFO: EXIT DONE 00:22:03.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:03.763 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:03.763 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:04.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:04.331 Cleaning 00:22:04.331 Removing: /var/run/dpdk/spdk0/config 00:22:04.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:04.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:04.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:04.331 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:04.331 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:04.331 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:04.331 Removing: /var/run/dpdk/spdk1/config 00:22:04.331 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:04.590 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:04.590 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:04.590 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:04.590 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:04.590 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:04.590 Removing: /var/run/dpdk/spdk2/config 00:22:04.590 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:04.590 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:04.590 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:04.590 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:04.590 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:04.590 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:04.590 Removing: /var/run/dpdk/spdk3/config 00:22:04.590 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:04.590 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:04.590 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:04.590 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:04.590 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:04.590 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:04.590 Removing: /var/run/dpdk/spdk4/config 00:22:04.590 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:04.590 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:04.590 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:04.590 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:04.590 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:04.590 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:04.590 Removing: /dev/shm/nvmf_trace.0 00:22:04.590 Removing: /dev/shm/spdk_tgt_trace.pid56425 00:22:04.590 Removing: /var/run/dpdk/spdk0 00:22:04.590 Removing: /var/run/dpdk/spdk1 00:22:04.590 Removing: /var/run/dpdk/spdk2 00:22:04.590 Removing: /var/run/dpdk/spdk3 00:22:04.590 Removing: /var/run/dpdk/spdk4 00:22:04.590 Removing: /var/run/dpdk/spdk_pid56266 00:22:04.590 Removing: /var/run/dpdk/spdk_pid56425 00:22:04.590 Removing: /var/run/dpdk/spdk_pid56637 00:22:04.590 Removing: /var/run/dpdk/spdk_pid56723 00:22:04.590 Removing: /var/run/dpdk/spdk_pid56743 00:22:04.590 Removing: /var/run/dpdk/spdk_pid56853 00:22:04.590 Removing: /var/run/dpdk/spdk_pid56871 00:22:04.590 Removing: /var/run/dpdk/spdk_pid57022 00:22:04.590 Removing: /var/run/dpdk/spdk_pid57223 00:22:04.590 Removing: /var/run/dpdk/spdk_pid57378 00:22:04.590 Removing: /var/run/dpdk/spdk_pid57462 00:22:04.590 
Removing: /var/run/dpdk/spdk_pid57550 00:22:04.590 Removing: /var/run/dpdk/spdk_pid57655 00:22:04.590 Removing: /var/run/dpdk/spdk_pid57733 00:22:04.590 Removing: /var/run/dpdk/spdk_pid57766 00:22:04.590 Removing: /var/run/dpdk/spdk_pid57807 00:22:04.590 Removing: /var/run/dpdk/spdk_pid57877 00:22:04.590 Removing: /var/run/dpdk/spdk_pid57958 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58408 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58448 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58498 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58507 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58574 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58590 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58657 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58671 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58716 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58727 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58772 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58790 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58932 00:22:04.590 Removing: /var/run/dpdk/spdk_pid58968 00:22:04.590 Removing: /var/run/dpdk/spdk_pid59051 00:22:04.590 Removing: /var/run/dpdk/spdk_pid59401 00:22:04.590 Removing: /var/run/dpdk/spdk_pid59413 00:22:04.590 Removing: /var/run/dpdk/spdk_pid59449 00:22:04.590 Removing: /var/run/dpdk/spdk_pid59463 00:22:04.590 Removing: /var/run/dpdk/spdk_pid59484 00:22:04.590 Removing: /var/run/dpdk/spdk_pid59503 00:22:04.590 Removing: /var/run/dpdk/spdk_pid59511 00:22:04.590 Removing: /var/run/dpdk/spdk_pid59532 00:22:04.590 Removing: /var/run/dpdk/spdk_pid59551 00:22:04.591 Removing: /var/run/dpdk/spdk_pid59570 00:22:04.591 Removing: /var/run/dpdk/spdk_pid59580 00:22:04.591 Removing: /var/run/dpdk/spdk_pid59605 00:22:04.591 Removing: /var/run/dpdk/spdk_pid59618 00:22:04.591 Removing: /var/run/dpdk/spdk_pid59639 00:22:04.591 Removing: /var/run/dpdk/spdk_pid59658 00:22:04.591 Removing: /var/run/dpdk/spdk_pid59666 00:22:04.591 Removing: /var/run/dpdk/spdk_pid59687 00:22:04.591 Removing: /var/run/dpdk/spdk_pid59706 00:22:04.591 Removing: /var/run/dpdk/spdk_pid59725 00:22:04.849 Removing: /var/run/dpdk/spdk_pid59735 00:22:04.849 Removing: /var/run/dpdk/spdk_pid59771 00:22:04.849 Removing: /var/run/dpdk/spdk_pid59785 00:22:04.849 Removing: /var/run/dpdk/spdk_pid59814 00:22:04.849 Removing: /var/run/dpdk/spdk_pid59892 00:22:04.849 Removing: /var/run/dpdk/spdk_pid59927 00:22:04.849 Removing: /var/run/dpdk/spdk_pid59932 00:22:04.849 Removing: /var/run/dpdk/spdk_pid59966 00:22:04.849 Removing: /var/run/dpdk/spdk_pid59976 00:22:04.849 Removing: /var/run/dpdk/spdk_pid59983 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60027 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60040 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60069 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60078 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60089 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60097 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60112 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60122 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60133 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60143 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60171 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60198 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60207 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60236 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60245 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60253 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60299 00:22:04.849 Removing: /var/run/dpdk/spdk_pid60306 00:22:04.849 Removing: 
/var/run/dpdk/spdk_pid60337 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60346 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60354 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60361 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60369 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60382 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60384 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60397 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60485 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60527 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60651 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60690 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60735 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60744 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60766 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60786 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60822 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60833 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60917 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60943 00:22:04.850 Removing: /var/run/dpdk/spdk_pid60988 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61057 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61113 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61146 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61250 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61298 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61330 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61563 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61666 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61689 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61724 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61752 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61791 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61824 00:22:04.850 Removing: /var/run/dpdk/spdk_pid61856 00:22:04.850 Removing: /var/run/dpdk/spdk_pid62273 00:22:04.850 Removing: /var/run/dpdk/spdk_pid62302 00:22:04.850 Removing: /var/run/dpdk/spdk_pid62652 00:22:04.850 Removing: /var/run/dpdk/spdk_pid63125 00:22:04.850 Removing: /var/run/dpdk/spdk_pid63395 00:22:04.850 Removing: /var/run/dpdk/spdk_pid64258 00:22:04.850 Removing: /var/run/dpdk/spdk_pid65191 00:22:04.850 Removing: /var/run/dpdk/spdk_pid65314 00:22:04.850 Removing: /var/run/dpdk/spdk_pid65380 00:22:04.850 Removing: /var/run/dpdk/spdk_pid66846 00:22:04.850 Removing: /var/run/dpdk/spdk_pid67170 00:22:04.850 Removing: /var/run/dpdk/spdk_pid70971 00:22:04.850 Removing: /var/run/dpdk/spdk_pid71345 00:22:04.850 Removing: /var/run/dpdk/spdk_pid71454 00:22:04.850 Removing: /var/run/dpdk/spdk_pid71581 00:22:04.850 Removing: /var/run/dpdk/spdk_pid71608 00:22:04.850 Removing: /var/run/dpdk/spdk_pid71639 00:22:04.850 Removing: /var/run/dpdk/spdk_pid71668 00:22:04.850 Removing: /var/run/dpdk/spdk_pid71766 00:22:04.850 Removing: /var/run/dpdk/spdk_pid71894 00:22:04.850 Removing: /var/run/dpdk/spdk_pid72043 00:22:04.850 Removing: /var/run/dpdk/spdk_pid72130 00:22:04.850 Removing: /var/run/dpdk/spdk_pid72324 00:22:04.850 Removing: /var/run/dpdk/spdk_pid72401 00:22:05.109 Removing: /var/run/dpdk/spdk_pid72486 00:22:05.109 Removing: /var/run/dpdk/spdk_pid72856 00:22:05.109 Removing: /var/run/dpdk/spdk_pid73277 00:22:05.109 Removing: /var/run/dpdk/spdk_pid73278 00:22:05.109 Removing: /var/run/dpdk/spdk_pid73279 00:22:05.109 Removing: /var/run/dpdk/spdk_pid73540 00:22:05.109 Removing: /var/run/dpdk/spdk_pid73805 00:22:05.109 Removing: /var/run/dpdk/spdk_pid74209 00:22:05.109 Removing: /var/run/dpdk/spdk_pid74211 00:22:05.109 Removing: /var/run/dpdk/spdk_pid74548 00:22:05.109 Removing: /var/run/dpdk/spdk_pid74569 
00:22:05.109 Removing: /var/run/dpdk/spdk_pid74583 00:22:05.109 Removing: /var/run/dpdk/spdk_pid74610 00:22:05.109 Removing: /var/run/dpdk/spdk_pid74621 00:22:05.109 Removing: /var/run/dpdk/spdk_pid74985 00:22:05.109 Removing: /var/run/dpdk/spdk_pid75029 00:22:05.109 Removing: /var/run/dpdk/spdk_pid75372 00:22:05.109 Removing: /var/run/dpdk/spdk_pid75575 00:22:05.109 Removing: /var/run/dpdk/spdk_pid76010 00:22:05.109 Removing: /var/run/dpdk/spdk_pid76571 00:22:05.109 Removing: /var/run/dpdk/spdk_pid77452 00:22:05.109 Removing: /var/run/dpdk/spdk_pid78092 00:22:05.109 Removing: /var/run/dpdk/spdk_pid78094 00:22:05.109 Removing: /var/run/dpdk/spdk_pid80141 00:22:05.109 Removing: /var/run/dpdk/spdk_pid80188 00:22:05.109 Removing: /var/run/dpdk/spdk_pid80241 00:22:05.109 Removing: /var/run/dpdk/spdk_pid80295 00:22:05.109 Removing: /var/run/dpdk/spdk_pid80395 00:22:05.109 Removing: /var/run/dpdk/spdk_pid80448 00:22:05.109 Removing: /var/run/dpdk/spdk_pid80508 00:22:05.109 Removing: /var/run/dpdk/spdk_pid80555 00:22:05.109 Removing: /var/run/dpdk/spdk_pid80927 00:22:05.109 Removing: /var/run/dpdk/spdk_pid82152 00:22:05.109 Removing: /var/run/dpdk/spdk_pid82285 00:22:05.109 Removing: /var/run/dpdk/spdk_pid82521 00:22:05.109 Removing: /var/run/dpdk/spdk_pid83131 00:22:05.109 Removing: /var/run/dpdk/spdk_pid83291 00:22:05.109 Removing: /var/run/dpdk/spdk_pid83448 00:22:05.109 Removing: /var/run/dpdk/spdk_pid83545 00:22:05.109 Removing: /var/run/dpdk/spdk_pid83704 00:22:05.109 Removing: /var/run/dpdk/spdk_pid83813 00:22:05.109 Removing: /var/run/dpdk/spdk_pid84525 00:22:05.109 Removing: /var/run/dpdk/spdk_pid84562 00:22:05.109 Removing: /var/run/dpdk/spdk_pid84597 00:22:05.109 Removing: /var/run/dpdk/spdk_pid84853 00:22:05.109 Removing: /var/run/dpdk/spdk_pid84888 00:22:05.109 Removing: /var/run/dpdk/spdk_pid84918 00:22:05.109 Removing: /var/run/dpdk/spdk_pid85400 00:22:05.109 Removing: /var/run/dpdk/spdk_pid85416 00:22:05.109 Removing: /var/run/dpdk/spdk_pid85679 00:22:05.109 Removing: /var/run/dpdk/spdk_pid85812 00:22:05.109 Removing: /var/run/dpdk/spdk_pid85827 00:22:05.109 Clean 00:22:05.109 08:34:52 -- common/autotest_common.sh@1441 -- # return 0 00:22:05.109 08:34:52 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:22:05.109 08:34:52 -- common/autotest_common.sh@735 -- # xtrace_disable 00:22:05.109 08:34:52 -- common/autotest_common.sh@10 -- # set +x 00:22:05.109 08:34:52 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:22:05.109 08:34:52 -- common/autotest_common.sh@735 -- # xtrace_disable 00:22:05.109 08:34:52 -- common/autotest_common.sh@10 -- # set +x 00:22:05.369 08:34:52 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:05.369 08:34:52 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:05.369 08:34:52 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:05.369 08:34:52 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:22:05.369 08:34:52 -- spdk/autotest.sh@398 -- # hostname 00:22:05.369 08:34:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:05.628 geninfo: WARNING: invalid characters removed from testname! 
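For reference, the coverage-capture step logged just above reduces to a single lcov invocation. The sketch below is a reconstruction from the flags visible in this trace, not the autotest.sh source itself; the repo path, output path, and the hostname-derived test name are specific to this job, and the extra genhtml_*/geninfo_* --rc switches shown in the log are left out for brevity.

# Approximate reconstruction of the capture step above (assumptions noted in the lead-in)
cd /home/vagrant/spdk_repo/spdk
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
    -q -c --no-external \
    -d . -t "$(hostname)" \
    -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info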
00:22:37.814 08:35:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:37.815 08:35:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:40.350 08:35:27 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:43.638 08:35:30 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:46.924 08:35:33 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:49.457 08:35:36 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:52.757 08:35:39 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:52.757 08:35:39 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:52.757 08:35:39 -- common/autotest_common.sh@741 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:52.757 08:35:39 -- common/autotest_common.sh@743 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:52.757 08:35:39 -- common/autotest_common.sh@744 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:52.757 08:35:39 -- common/autotest_common.sh@747 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:52.757 + [[ -n 5199 ]] 00:22:52.757 + sudo kill 5199 00:22:52.776 [Pipeline] } 00:22:52.793 [Pipeline] // timeout 00:22:52.798 [Pipeline] } 00:22:52.813 [Pipeline] // stage 00:22:52.819 [Pipeline] } 00:22:52.836 [Pipeline] // catchError 00:22:52.845 [Pipeline] stage 00:22:52.847 [Pipeline] { (Stop VM) 00:22:52.861 [Pipeline] sh 00:22:53.146 + vagrant halt 00:22:57.339 ==> default: Halting domain... 
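The lcov passes recorded in the preceding line first merge the base and test captures and then prune vendored and system paths from the combined tracefile. A condensed sketch follows; the out variable and the loop are my shorthand rather than anything in the trace, the prune patterns are copied from this job's log (another autotest job may prune a different set), and the --rc coverage switches from the log are omitted.

# Condensed sketch of the merge-and-filter passes shown above
out=/home/vagrant/spdk_repo/spdk/../output
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# note: the '/usr/*' pass in the trace additionally uses --ignore-errors unused,unused
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done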
00:23:02.619 [Pipeline] sh 00:23:02.898 + vagrant destroy -f 00:23:07.092 ==> default: Removing domain... 00:23:07.103 [Pipeline] sh 00:23:07.384 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:07.394 [Pipeline] } 00:23:07.409 [Pipeline] // stage 00:23:07.415 [Pipeline] } 00:23:07.430 [Pipeline] // dir 00:23:07.435 [Pipeline] } 00:23:07.450 [Pipeline] // wrap 00:23:07.456 [Pipeline] } 00:23:07.469 [Pipeline] // catchError 00:23:07.479 [Pipeline] stage 00:23:07.481 [Pipeline] { (Epilogue) 00:23:07.495 [Pipeline] sh 00:23:07.777 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:15.902 [Pipeline] catchError 00:23:15.904 [Pipeline] { 00:23:15.916 [Pipeline] sh 00:23:16.198 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:16.198 Artifacts sizes are good 00:23:16.208 [Pipeline] } 00:23:16.222 [Pipeline] // catchError 00:23:16.234 [Pipeline] archiveArtifacts 00:23:16.241 Archiving artifacts 00:23:16.367 [Pipeline] cleanWs 00:23:16.378 [WS-CLEANUP] Deleting project workspace... 00:23:16.379 [WS-CLEANUP] Deferred wipeout is used... 00:23:16.385 [WS-CLEANUP] done 00:23:16.387 [Pipeline] } 00:23:16.402 [Pipeline] // stage 00:23:16.407 [Pipeline] } 00:23:16.421 [Pipeline] // node 00:23:16.426 [Pipeline] End of Pipeline 00:23:16.466 Finished: SUCCESS
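Stripped of the Jenkins pipeline wrapping, the teardown and artifact-collection epilogue recorded above amounts to a short sequence of commands. The sketch lists them in the order they ran; the workspace path is specific to this runner, and the contents of the two jbp helper scripts are not part of this log.

# Post-test epilogue as recorded in this log (sketch, not the pipeline source)
vagrant halt                      # "Halting domain..."
vagrant destroy -f                # "Removing domain..."
mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh   # "Artifacts sizes are good"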